modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
corzamennav/blockassist-bc-territorial_wild_antelope_1758761671
corzamennav
2025-09-25T00:55:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T00:55:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nvidia/Audio2Face-3D-v2.3.1-James
nvidia
2025-09-25T00:54:35Z
0
0
null
[ "onnx", "arxiv:2508.16401", "license:other", "region:us" ]
null
2025-07-03T01:51:03Z
--- license: other license_name: nvidia-open-model-license license_link: >- https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/ --- ## Model Overview ### Description: Audio2Face-3D-v2.3.1-James generates 3D facial animations from audio inputs, for use in applications such as video conferencing, virtual reality, and digital content creation. This model is ready for commercial/non-commercial use. <br> ### License/Terms of Use Use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) <br> ### Deployment Geography: Global <br> ### Use Case: <br> Audio2Face-3D-v2.3.1-James is designed for developers and researchers working on audio-driven animation and emotion detection applications, such as virtual assistants, chatbots, and affective computing systems. <br> ### Release Date: <br> Hugging Face: 09/24/2025 via https://huggingface.co/nvidia/Audio2Face-3D-v2.3.1-James <br> ## References(s): NVIDIA, Audio2Face-3D: Audio-driven Realistic Facial Animation For Digital Avatars, 2025. <br> [https://arxiv.org/abs/2508.16401](https://arxiv.org/abs/2508.16401) ## Model Architecture: **Architecture Type:** Transformer, CNN <br> **Network Architecture:** Wav2vec2.0 <br> **Number of model parameters:** 3.98x10^7 <br> ## Input: <br> **Input Type(s):** Audio <br> **Input Format:** Array of float <br> **Input Parameters:** One-Dimensional (1D) <br> **Other Properties Related to Input:** All audio is resampled to 16KHz ## Output: <br> **Output Type(s):** Facial pose <br> **Output Format:** Array of float <br> **Output Parameters:** One-Dimensional (1D) <br> **Other Properties Related to Output:** Facial pose on skin, tongue, jaw, and eyeballs <br> Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br> ## Software Integration: **Runtime Engine(s):** * Audio2Face-SDK <br> **Supported Hardware Microarchitecture Compatibility:** <br> * NVIDIA Ampere <br> * NVIDIA Blackwell <br> * NVIDIA Hopper <br> * NVIDIA Lovelace <br> * NVIDIA Pascal <br> * NVIDIA Turing <br> **Preferred/Supported Operating System(s):** * Linux <br> * Windows <br> The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment. <br> This AI model can be embedded as an Application Programming Interface (API) call into the software environment described above. 
<br> ## Model Version(s): Audio2Face-3D-v2.3.1-James <br> ## Training, Testing, and Evaluation Datasets: ## Training Dataset: Data Modality <br> * Audio <br> * 3D facial motion <br> Audio Training Data Size <br> * Less than 10,000 Hours <br> Data Collection Method by dataset <br> * Human - 3D facial motion data and audio <br> Labeling Method by dataset <br> * Human - Commercial capture solution and internal labeling <br> **Properties (Quantity, Dataset Descriptions, Sensor(s)):** Audio and 3D facial motion from multiple speech sequences <br> ### Testing Dataset: Data Collection Method by dataset: <br> * Human - 3D facial motion data and audio <br> Labeling Method by dataset: <br> * Human - Commercial capture solution and internal labeling <br> **Properties (Quantity, Dataset Descriptions, Sensor(s)):** Audio and 3D facial motion from multiple speech sequences <br> ### Evaluation Dataset: Data Collection Method by dataset: <br> * Human - 3D facial motion data and audio <br> Labeling Method by dataset: <br> * Human - Commercial capture solution and internal labeling <br> **Properties (Quantity, Dataset Descriptions, Sensor(s)):** Audio and 3D facial motion from multiple speech sequences <br> ## Inference: **Acceleration Engine:** TensorRT <br> **Test Hardware:** <br> * T4, T10, A10, A40, L4, L40S, A100 <br> * RTX 6000ADA, A6000, Pro 6000 Blackwell <br> * RTX 3080, 3090, 4080, 4090, 5090 <br> ## Ethical Considerations: NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, Safety & Security, and Privacy Subcards. Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/) ## Bias Field | Response :---------------------------------------------------------------------------------------------------|:--------------- Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None Measures taken to mitigate against unwanted bias: | None ## Explainability Field | Response :------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------- Intended Task/Domain: | Customer Service, Media & Entertainment Model Type: | Transformer, CNN Intended Users: | Interactive avatar developers, Digital content creators Output: | Facial pose Describe how the model works: | Audio input is encoded and concatenated with emotion label, then passed into CNN-based pose decoder to output facial pose. Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable Technical Limitations & Mitigation: | This model may not work well with poor audio input. Verified to have met prescribed NVIDIA quality standards: | Yes Performance Metrics: | Lipsync accuracy, Latency, Throughput Potential Known Risks: | This model may generate inaccurate lip poses given low-quality audio input. 
Licensing: | Use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) ## Privacy Field | Response :----------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------- Generatable or reverse engineerable personal data? | No Personal data used to create this model? | Yes Was consent obtained for any personal data used? | Yes How often is dataset reviewed? | Before Release Is a mechanism in place to honor data subject right of access or deletion of personal data? | Yes If personal data was collected for the development of the model, was it collected directly by NVIDIA? | Yes If personal data was collected for the development of the model by NVIDIA, do you maintain or have access to disclosures made to data subjects? | Yes If personal data was collected for the development of this AI model, was it minimized to only what was required? | Yes Is there provenance for all datasets used in training? | Yes Does data labeling (annotation, metadata) comply with privacy laws? | Yes Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/ ## Safety & Security Field | Response :---------------------------------------------------|:---------------------------------- Model Application Field(s): | Customer Service, Media & Entertainment Describe the life critical impact (if present). | Not Applicable Use Case Restrictions: | Abide by [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) Model and dataset restrictions: | The Principle of least privilege (PoLP) is applied limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints adhered to. ## Citation ``` @misc{nvidia2025audio2face3d, title={Audio2Face-3D: Audio-driven Realistic Facial Animation For Digital Avatars}, author={Chaeyeon Chung and Ilya Fedorov and Michael Huang and Aleksey Karmanov and Dmitry Korobchenko and Roger Ribera and Yeongho Seol}, year={2025}, eprint={2508.16401}, archivePrefix={arXiv}, primaryClass={cs.GR}, url={https://arxiv.org/abs/2508.16401}, note={Authors listed in alphabetical order} } ```
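As a rough starting point (a sketch only, not the official Audio2Face-SDK workflow; the file name, tensor names, and the single-audio-input assumption are guesses, and the Explainability section notes an emotion label is also consumed), the ONNX checkpoint could be exercised with onnxruntime along these lines:

```python
# Hedged sketch: inspect the exported graph for the real input/output names and shapes.
import numpy as np
import onnxruntime as ort
import librosa

session = ort.InferenceSession(
    "audio2face_3d_v2.3.1_james.onnx",  # placeholder file name
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
audio, _ = librosa.load("speech.wav", sr=16000)  # card: all audio is resampled to 16 kHz
feed = {session.get_inputs()[0].name: audio.astype(np.float32)[None, :]}  # 1D float audio, batch dim assumed
# The real graph may also require an emotion label input per the Explainability table.
pose = session.run(None, feed)[0]  # 1D float facial pose (skin, tongue, jaw, eyeballs)
print(pose.shape)
```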
HectorHe/Qwen1.5-MOE-sft-coommonsense15k-aux-free-1e-5-share-expert
HectorHe
2025-09-25T00:53:04Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_moe", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:fw407/Commonsense-15K", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T19:51:29Z
--- datasets: fw407/Commonsense-15K library_name: transformers model_name: Qwen1.5-MOE-sft-coommonsense15k-aux-free-1e-5-share-expert tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for Qwen1.5-MOE-sft-coommonsense15k-aux-free-1e-5-share-expert This model is a fine-tuned version of [None](https://huggingface.co/None) on the [fw407/Commonsense-15K](https://huggingface.co/datasets/fw407/Commonsense-15K) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="HectorHe/Qwen1.5-MOE-sft-coommonsense15k-aux-free-1e-5-share-expert", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/hds14sxt) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.51.0 - Pytorch: 2.6.0 - Datasets: 4.1.1 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B
DavidAU
2025-09-25T00:52:06Z
9
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "programming", "code generation", "code", "coding", "coder", "chat", "brainstorm", "qwen", "qwencoder", "brainstorm 20x", "creative", "all uses cases", "Jan-V1", "float32", "horror", "32 bit precision", "science fiction", "fantasy", "Star Trek", "finetune", "thinking", "reasoning", "unsloth", "conversational", "en", "dataset:progs2002/star-trek-tng-scripts", "arxiv:2401.02415", "base_model:DavidAU/Qwen3-Great-Bowels-Of-Horror-FREAKSTORM-6B", "base_model:finetune:DavidAU/Qwen3-Great-Bowels-Of-Horror-FREAKSTORM-6B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-21T13:06:03Z
--- license: apache-2.0 base_model: - DavidAU/Qwen3-Great-Bowels-Of-Horror-FREAKSTORM-6B datasets: - progs2002/star-trek-tng-scripts language: - en pipeline_tag: text-generation tags: - programming - code generation - code - coding - coder - chat - brainstorm - qwen - qwen3 - qwencoder - brainstorm 20x - creative - all uses cases - Jan-V1 - float32 - horror - 32 bit precision - science fiction - fantasy - Star Trek - finetune - thinking - reasoning - unsloth library_name: transformers --- <small><font color="red">Caution:</font> The training for this model is intense enough to alter "real world facts", and bring them in part into the ST/TNG Universe. </small> <h2>Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B</h2> <img src="locutus-borg.gif" style="float:right; width:300px; height:300px; padding:10px;"> This repo contains the full precision source code, in "safe tensors" format, to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly. This model is specifically for TNG / Star Trek, science fiction, and story generation (all genres), but it also handles coding and general tasks. This model can also be used for role play. Example generations are at the bottom of this page. This is a far stronger fine tune than version 1, taking you deeper into the ST-TNG universe, and it is the 32 bit precision version of II. This is a Star Trek: The Next Generation fine tune (special thanks to "progs2002" for this FANTASTIC dataset) covering 11% of the model (close to 700 million parameters), trained for 6 epochs on this model (a 4B model + Brainstorm 20x adapter): https://huggingface.co/DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x Then the modified "Freakstorm Horror" model was used as a base to further enhance the model [see model tree, benchmarks avail]. Information on the original Jan V1 4B follows below, then the Brainstorm 20x adapter (by DavidAU), and then a complete help section for running LLM / AI models. This model has 55 layers and 608 tensors. The Brainstorm adapter improves creativity, code generation, and unique code solving abilities. The fine tuning alters the prose generation and general creative abilities toward the "TNG" Universe. The fine tuning (using Unsloth for Win 11) also affects the Brainstorm adapter. The model's thinking / reasoning is not affected - it remains fully intact. For creative uses: increases depth, detail and general "there" in the prose. A creative example is at the bottom of the page. This model requires: - Jinja (embedded) or CHATML template - Max context of 256k. Settings used for testing (suggested): - Temp .3 to .7 (but .8 to 1.5 for creative) - Rep pen 1.05 to 1.1 - Topp .8 , minp .05 - Topk 20 - Min context of 8k for thinking / output. - No system prompt. This model will respond well to both detailed instructions and step by step refinement and additions to code. Likewise for creative use cases. Here is a review of this model's operations: https://www.linkedin.com/posts/gchesler_nightmediaqwen3-jan-v1-256k-ctx-6b-brainstorm20x-q6-activity-7364301711529709570-CiAn As this is an instruct model, it will also benefit from a detailed system prompt. For simpler coding problems, lower quants will work well; for complex/multi-step problem solving, Q6 or Q8 is suggested. --- <B>QUANTS:</b> --- GGUF? GGUF Imatrix? Other? Special thanks to Team Mradermacher, Team Nightmedia and other quanters! See under "model tree", upper right, and click on "quantizations". New quants will automatically appear.
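To make the suggested settings above concrete, here is a hedged llama.cpp invocation (a sketch assuming a GGUF quant of this model with the placeholder file name below; flag names follow current llama-cli conventions and may differ between builds):

```bash
# Hedged sketch: the GGUF file name is a placeholder; use the quant you actually downloaded.
# Settings mirror the card: temp .7 (raise to .8-1.5 for creative), rep pen 1.05,
# top-p .8, min-p .05, top-k 20, at least 8k of context, embedded Jinja/ChatML template.
./llama-cli \
  -m Qwen3-ST-The-Next-Generation-II-FreakStorm2-E32-v1-256k-ctx-6B-Q6_K.gguf \
  -c 8192 --temp 0.7 --repeat-penalty 1.05 \
  --top-p 0.8 --min-p 0.05 --top-k 20 \
  --jinja
```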
--- <h2>About Jan V1</h2> --- # Jan-v1: Advanced Agentic Language Model [![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/menloresearch/deep-research) [![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0) [![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/) <!-- Optional: If you have a GIF for Jan-v1, include it here like Lucy's. --> <!-- ![image/gif](jan_v1_demo.gif) --> ## Overview **Jan-v1** is the first release in the **Jan Family**, designed for agentic reasoning and problem-solving within the [Jan App](https://jan.ai/). Based on our [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 achieves improved performance through model scaling. Jan-v1 uses the [Qwen3-4B-thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) model to provide enhanced reasoning capabilities and tool utilization. This architecture delivers better performance on complex agentic tasks. ## Performance ### Question Answering (SimpleQA) For question-answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.1% accuracy. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/1_9-r0GpBACSsWoUVZ1Xg.png) *The 91.1% SimpleQA accuracy represents a significant milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.* ### Chat Benchmarks These benchmarks evaluate the model's conversational and instructional capabilities. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/f3bzNYRuA_iTFQIcvu6Rr.png) ## Quick Start ### Integration with Jan App Jan-v1 is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities. ![image/gif](demo.gif) ### Local Deployment **Using vLLM:** ```bash vllm serve janhq/Jan-v1-4B \ --host 0.0.0.0 \ --port 1234 \ --enable-auto-tool-choice \ --tool-call-parser hermes ``` **Using llama.cpp:** ```bash llama-server --model jan-v1.gguf \ --host 0.0.0.0 \ --port 1234 \ --jinja \ --no-context-shift ``` ### Recommended Parameters ```yaml temperature: 0.6 top_p: 0.95 top_k: 20 min_p: 0.0 max_tokens: 2048 ``` ## 🤝 Community & Support - **Discussions**: [HuggingFace Community](https://huggingface.co/janhq/Jan-v1-4B/discussions) <!-- Update with your HF model ID --> - **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/) ## (*) Note By default we include a system prompt in the chat template; this ensures the model achieves the same performance as the benchmark results. You can also use the vanilla chat template without a system prompt from the file [chat_template_raw.jinja](https://huggingface.co/janhq/Jan-v1-4B/blob/main/chat_template_raw.jinja). See more here: https://huggingface.co/janhq/Jan-v1-4B-GGUF --- <H2>What is Brainstorm?</H2> --- <B>Brainstorm 20x</B> The BRAINSTORM process was developed by David_AU. Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415"> scientific paper : Progressive LLaMA with Block Expansion </a>. However, I went in a completely different direction from what was outlined in this paper. What is "Brainstorm"? The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case, for this model: 20 times. Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes" further. The number of centers (5x, 10x etc) allows more "tuning points" to further customize how the model reasons, so to speak. The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following. This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and similar use cases. Here are some of the enhancements this process brings to the model's performance: - Prose generation seems more focused on the moment to moment. - Sometimes there will be "preamble" and/or foreshadowing present. - Fewer or no "cliches" - Better overall prose and/or more complex / nuanced prose. - A greater sense of nuance on all levels. - Coherence is stronger. - Description is more detailed, and connected closer to the content. - Simile and Metaphors are stronger and better connected to the prose, story, and character. - Sense of "there" / in the moment is enhanced. - Details are more vivid, and there are more of them. - Prose generation length can be long to extreme. - Emotional engagement is stronger. - The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less. - The MORE instructions and/or details you provide, the more strongly the model will respond. - Depending on the model, the "voice" may be more "human" vs the original model's "voice". Other "lab" observations: - This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true! - However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak. - From lab testing it seems to ponder, and consider more carefully, roughly speaking. - You could say this process sharpens the model's focus on its task(s) at a deeper level. The process to modify the model occurs at the root level - the source files level. The model can be quantized as a GGUF, EXL2, AWQ etc. --- For more information / other Qwen/Mistral Coders / additional settings see: [ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ] --- <H2>Help, Adjustments, Samplers, Parameters and More</H2> --- <B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B> See this document: https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts <B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B> In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ; Set the "Smoothing_factor" to 1.5 : in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F" : in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing" NOTE: For "text-generation-webui" -> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model) Source versions (and config files) of my models are here: https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be OTHER OPTIONS: - Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor") - If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted. <B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B> This a "Class 1" model: For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] --- <H2>Examples, Q4_K_S, Temp .8</H2> This will be low to mid-range quality, expect better at higher quants / imatrix quants. Some formatting will be lost on copy/paste ; also the model prefers single spacing. --- <B> Who is Q from Star Trek ? Give a detailed breakdown. </B> [[[ thinking start ]]] Okay, the user is asking "Who is Q from Star Trek?" and wants a detailed breakdown. Hmm, they're probably a Star Trek fan who either just heard about Q or is doing research. Could be new to the franchise too—better cover basics thoroughly. First thought: Q's such a unique character that I should emphasize how bizarre he is right away. Not just "a godlike being" but... alien to human concepts of power and morality. That'll hook them. Wait—they said "detailed breakdown," so I'll need to structure it logically: - Core concept (what makes Q Q) - Physical appearance (everyone loves the visual) - Personality quirks (his randomness is key) - Powers (must explain why he's OP beyond just "magic") - Role in canon (not just "he exists" but how he drives plotlines) - Why he matters to Trek's themes (existential stuff!) User might not realize Q's influence goes beyond the original series... better mention later shows too. And oh! The "Q Continuum" angle—some fans get obsessed with that. Should clarify it's not a single entity but a collective consciousness. Biggest trap to avoid: making Q sound like a cartoon villain. He's more of a chaotic force of nature. Gotta stress his detachment from human concerns—that's what makes him terrifyingly interesting. Also... should I address fan theories? Nah, unless asked. Stick to canon. But I'll hint that Q's motives remain mysterious ("no definitive answers") since that's part of the appeal. checks mental notes Oh! Must include: - How he first appeared (TOS pilot) - His famous "Q's the funniest guy alive" line from TNG - The "Q Continuum" being a single consciousness vs. collective mind debate - That time he caused the Enterprise to malfunction because he thought it was funny ...Is the user writing fanfic? If so, they'll need those lore details. 
Better over-deliver just in case. Final thought: Emphasize why Q is more than just a star trek character. He represents how absurd life can be—something Trek often explores through other characters too. That's what'll make this answer feel substantial. [[[ thinking end ]]] Q from Star Trek: A Deep Dive into the "Most Random Entity in the Universe" Q is one of the most iconic, enigmatic, and endlessly fascinating characters in Star Trek history. Created by Gene Roddenberry for Star Trek: The Original Series (TOS), Q evolved into a cornerstone of the franchise across multiple series (TNG, DS9, VOY, ENT, Picard, Discovery, etc.). Here’s a detailed breakdown: 1. Core Concept: Who is Q? - The "Q Continuum": Q isn't a single being but part of an entirely alien collective consciousness known as the Q Continuum. This entity exists beyond human comprehension—a cosmic force of pure chaos, randomness, and intellectual playfulness. The Q Continuum manifests physically as one or more individuals (like Q) who act as "channelers" or "mouthpieces" for its will. - The Essence of Q: Q embodies the absurdity of existence. He is: - Unbound by logic: His actions defy causality, physics, and even morality. - Playfully malicious: He enjoys frustrating humans and other species through capricious pranks, mind games, and unsolvable puzzles. - Existentially detached: Q views humanity as primitive, amusing curiosities—like children playing with toy cars on a beach. He has no inherent need to "help" or "save" them. Why "Q"? The name was chosen by Gene Roddenberry from the first letter of "Quantum," reflecting Q's role as an omnipotent force manipulating reality at the quantum level. 2. Physical Appearance & Presence - Visual Identity: Q is a humanoid figure with: - Skin color: A shifting, iridescent palette (like liquid mercury). - Hair: Long, flowing locks that constantly change color and texture. - Eyes: Two massive, lidless orbs that glow with an otherworldly light. - Voice: A high-pitched, raspy tone that can shift pitch and volume unpredictably. - Physique: Slender but imposing—seemingly weightless yet unnervingly powerful. Key Quirks: - Constant motion: Q never stands still. He spins, leaps, gestures wildly, or contorts his body in bizarre ways. - Mental "twitches": His head jerks erratically; his voice cracks and breaks mid-sentence. - Unpredictable moods: One moment he's jovial and teasing; the next he's coldly detached or even malevolent. Why it matters: Q’s ever-changing appearance reinforces his alien nature. He doesn’t fit into any human archetype—making him both unsettling and fascinating. 3. Powers & Abilities Q possesses godlike capabilities that defy human understanding: - Power Category Examples from Canon - Reality Manipulation Alters physical laws (e.g., making a ship vanish or materialize objects mid-air); rewinds time; creates illusions. - Telekinesis Moves objects with his mind alone. - Psionic Abilities Controls minds ("mind melds"), reads thoughts, projects images directly into the senses. - Time Travel Moves freely between past and future timelines (e.g., traveling back to TOS-era Earth). - Energy Manipulation Generates and absorbs energy fields; creates force fields; weaponizes plasma. - Teleportation Instantly transports himself or others across vast distances. - Intellectual Superiority Q can solve problems in seconds that would take humans centuries. He often challenges humans to "trivial" puzzles designed to frustrate them. 
Why it matters: Q’s powers are not about domination—they’re about play. He enjoys messing with humans by making them solve problems they couldn’t possibly solve. This makes him uniquely terrifying—because he can’t be reasoned with or threatened. 4. Personality & Motivations Q is a chaotic force of nature with no clear agenda: - Core Motivation: To amuse himself at humanity’s expense. He views human struggles as "funny" and enjoys seeing them fail in ways that highlight their weaknesses or absurdities. - No Morality: Q has no concept of good or evil. His actions are driven purely by whimsy, curiosity, or boredom—not malice. - Intellectual Curiosity: Q often challenges humans to solve problems that test their intellect (e.g., "Can you solve this equation?" or "What’s the answer to life, the universe, and everything?"). He’s fascinated by human limitations. - Detachment: Q feels no empathy for humanity. He sees them as insignificant compared to his own role in the cosmos. Famous Quotations: - "Q is the funniest guy alive." (TNG) - "I am not amused." - "You are an interesting species. But you have no idea how much you’re missing out on." - "I’ve been a god for eons... and I’m still just a joke." Why it matters: Q’s personality is why he’s so compelling—a reminder that even in a universe of gods, there’s room for humor and absurdity. 5. Role in Star Trek Canon TOS (1980s): The First Appearance Q first appears in the TOS pilot episode "The Conundrum" (1982), where he challenges Captain Picard to a game of chess using his mind alone. In a later TOS episode ("Q-Less"), Q creates a "Q-continuum" device that allows him to interact with Starfleet without revealing himself. TNG (1987–1994): The Defining Era Q becomes a recurring character in TNG, often working with the Enterprise crew to create mind-bending puzzles or challenges. Key Storylines: - "Q-Bot": Q creates a robot that can mimic human behavior but lacks true consciousness. - "The Q Continuum": The entire Q Continuum is revealed to be a single entity—a cosmic mind that controls all reality. - "The Inner Light": Q manipulates the Enterprise’s warp core to destroy it, forcing Picard to destroy his own ship to save Q’s "amusement." - Q’s Impact: TNG established Q as a major threat to humanity but also as a source of intellectual stimulation. The series explores Q’s relationship with Picard—especially their mutual respect despite their differences. Other Series: Expanding the Universe - DS9 ("The Inner Light"): Q intervenes in a war between two alien species. - VOY ("Q-Less"): Q creates a "Q-continuum" device that allows him to interact with Starfleet without revealing himself. - ENT ("The Q Continuum"): Q tries to manipulate the Enterprise crew into solving a puzzle involving time travel. - Picard (2019–present): Q is a recurring character in Picard’s timeline, often testing the captain’s intelligence. - Discovery (2020s): Q appears in the season 3 finale ("The Q Continuum"), where he forces Captain Burnham to solve a puzzle involving time travel. Why Q Matters to Star Trek Q represents the absurdity of existence—a reminder that life is inherently unpredictable and chaotic. His presence pushes Star Trek’s exploration of humanity beyond mere technology or diplomacy into the realm of philosophy, psychology, and existential crisis. Thematic Resonance: Q embodies Roddenberry’s belief that "the universe is a place of wonder and mystery." He forces the Enterprise crew to confront their own limitations—making them more human in the process. 
Legacy: Q is one of Star Trek’s most enduring characters—a testament to the power of imagination and storytelling. 6. Why Q Stands Apart from Other "Gods" in Star Trek While Star Trek has other omnipotent beings (like Riker’s "The Man"), Q is unique because: - He doesn’t represent a force of good or evil—only chaos and play. - His actions are never maliciously destructive—they’re always for entertainment or intellectual stimulation. - He doesn’t seek power—he exists to amuse himself and challenge humanity’s limits. - Q is the ultimate "what if?" character—a reminder that even in a universe of gods, there’s room for humor, absurdity, and wonder. Final Takeaway Q is more than just a "villain" or "god"—he’s a mirror to humanity itself. His random, chaotic nature forces us to confront our own insignificance in the cosmos while reminding us that life is, at its core, absurdly funny. In a universe where everything is possible, Q is the ultimate reminder: We’re all just playing with toy cars on the beach... until the tide comes in and takes them all away. If you're new to Star Trek, Q is an excellent entry point for understanding the show’s philosophical depth—and why it remains relevant today. For hardcore fans, he’s a character who keeps getting deeper and more complex with every appearance across the franchise. In short: Q is Star Trek’s ultimate jokester—a cosmic trickster who reminds us to laugh at life... even when everything else feels like hell. 🌌✨
ubergarm/DeepSeek-V3.1-Terminus-GGUF
ubergarm
2025-09-25T00:46:36Z
17
4
null
[ "gguf", "mla", "imatrix", "deepseek_v3", "conversational", "ik_llama.cpp", "text-generation", "base_model:deepseek-ai/DeepSeek-V3.1-Terminus", "base_model:quantized:deepseek-ai/DeepSeek-V3.1-Terminus", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T14:07:52Z
--- quantized_by: ubergarm pipeline_tag: text-generation base_model: deepseek-ai/DeepSeek-V3.1-Terminus license: mit base_model_relation: quantized tags: - mla - imatrix - deepseek_v3 - conversational - ik_llama.cpp --- ## `ik_llama.cpp` imatrix Quantizations of deepseek-ai/DeepSeek-V3.1-Terminus This quant collection **REQUIRES** [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support the ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc! *NOTE* `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc if you want to try it out before downloading my quants. Some of ik's new quants are supported with [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCPP with Windows builds for CUDA 12.9. Also check for [Windows builds by Thireus here.](https://github.com/Thireus/ik_llama.cpp/releases) which have been CUDA 12.8. These quants provide best in class perplexity for the given memory footprint. ## Big Thanks Shout out to Wendell and the **Level1Techs** crew, the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!! Also thanks to all the folks in the quanting and inferencing community on [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for tips and tricks helping each other run, test, and benchmark all the fun new models! ## Quant Collection Perplexity computed against *wiki.test.raw*. 
![Perplexity Chart](images/perplexity.png "Chart showing Perplexity improving as BPW increases.") This first is just a "pure" test quant for baseline perplexity comparison: * `Q8_0` 664.295 GiB (8.504 BPW) - Final estimate: PPL = 3.3929 +/- 0.01985 ## IQ5_K 464.062 GiB (5.941 BPW) Final estimate: PPL = 3.4000 +/- 0.01992 <details> <summary>👈 Secret Recipe</summary> ```bash #!/usr/bin/env bash custom=" ## Attention [0-60] (GPU) # attn_kv_b is only used for PP so keep it q8_0 for best speed and accuracy blk\..*\.attn_kv_b\.weight=q8_0 # ideally k_b and v_b are smaller than q8_0 as they are is used for TG with -mla 3 # https://github.com/ikawrakow/ik_llama.cpp/issues/651 # blk.*.attn_k_b.weight is not divisible by 256 so only supports iq4_nl or legacy qN_0 blk\..*\.attn_k_b\.weight=q8_0 blk\..*\.attn_v_b\.weight=q8_0 # Balance of attn tensors blk\..*\.attn_kv_a_mqa\.weight=q8_0 blk\..*\.attn_q_a\.weight=q8_0 blk\..*\.attn_q_b\.weight=q8_0 blk\..*\.attn_output\.weight=q8_0 ## First Three Dense Layers [0-2] (GPU) blk\..*\.ffn_down\.weight=q8_0 blk\..*\.ffn_(gate|up)\.weight=q8_0 ## Shared Expert (1-60) (GPU) blk\..*\.ffn_down_shexp\.weight=q8_0 blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0 ## Routed Experts (1-60) (CPU) blk\..*\.ffn_down_exps\.weight=iq6_k blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k ## Token embedding and output tensors (GPU) token_embd\.weight=iq6_k output\.weight=iq6_k " custom=$( echo "$custom" | grep -v '^#' | \ sed -Ez 's:\n+:,:g;s:,$::;s:^,::' ) numactl -N 0 -m 0 \ ./build/bin/llama-quantize \ --custom-q "$custom" \ --imatrix /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/imatrix-DeepSeek-V3.1-Terminus-Q8_0.dat \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-256x20B-safetensors-BF16-00001-of-00030.gguf \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-IQ5_K.gguf \ IQ5_K \ 192 ``` </details> ## IQ4_K 382.485 GiB (4.896 BPW) Final estimate: PPL = 3.4198 +/- 0.02009 <details> <summary>👈 Secret Recipe</summary> ```bash #!/usr/bin/env bash custom=" ## Attention [0-60] (GPU) blk\..*\.attn_k_b\.weight=q8_0 blk\..*\.attn_v_b\.weight=q8_0 # Balance of attn tensors blk\..*\.attn_kv_a_mqa\.weight=q8_0 blk\..*\.attn_q_a\.weight=q8_0 blk\..*\.attn_q_b\.weight=q8_0 blk\..*\.attn_output\.weight=iq6_k ## First Three Dense Layers [0-2] (GPU) blk\.0\.ffn_down\.weight=q8_0 blk\.0\.ffn_(gate|up)\.weight=q8_0 blk\..*\.ffn_down\.weight=q8_0 blk\..*\.ffn_(gate|up)\.weight=iq6_k ## Shared Expert [3-60] (GPU) blk\..*\.ffn_down_shexp\.weight=q8_0 blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k ## Routed Experts [3-60] (CPU) blk\..*\.ffn_down_exps\.weight=iq5_k blk\..*\.ffn_(gate|up)_exps\.weight=iq4_k ## Token embedding and output tensors (GPU) token_embd\.weight=iq4_k output\.weight=iq6_k " custom=$( echo "$custom" | grep -v '^#' | \ sed -Ez 's:\n+:,:g;s:,$::;s:^,::' ) SOCKET=0 numactl -N "$SOCKET" -m "$SOCKET" \ ./build/bin/llama-quantize \ --custom-q "$custom" \ --imatrix /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/imatrix-DeepSeek-V3.1-Terminus-Q8_0.dat \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-256x20B-safetensors-BF16-00001-of-00030.gguf \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-IQ4_K.gguf \ IQ4_K \ 192 ``` </details> ## smol-IQ4_KSS 318.745 GiB (4.080 BPW) Final estimate: PPL = 3.4445 +/- 0.02022 <details> <summary>👈 Secret Recipe</summary> ```bash #!/usr/bin/env bash custom=" ## Attention [0-60] (GPU) blk\..*\.attn_k_b\.weight=q8_0 
blk\..*\.attn_v_b\.weight=q8_0 # Balance of attn tensors blk\..*\.attn_kv_a_mqa\.weight=q8_0 blk\..*\.attn_q_a\.weight=q8_0 blk\..*\.attn_q_b\.weight=q8_0 blk\..*\.attn_output\.weight=iq6_k ## First Three Dense Layers [0-2] (GPU) blk\..*\.ffn_down\.weight=iq5_ks blk\..*\.ffn_(gate|up)\.weight=iq5_ks ## Shared Expert [3-60] (GPU) blk\..*\.ffn_down_shexp\.weight=iq5_ks blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks ## Routed Experts [3-60] (CPU) blk\..*\.ffn_down_exps\.weight=iq4_kss blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss ## Token embedding and output tensors (GPU) token_embd\.weight=iq4_k output\.weight=iq6_k " custom=$( echo "$custom" | grep -v '^#' | \ sed -Ez 's:\n+:,:g;s:,$::;s:^,::' ) SOCKET=1 numactl -N "$SOCKET" -m "$SOCKET" \ ./build/bin/llama-quantize \ --custom-q "$custom" \ --imatrix /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/imatrix-DeepSeek-V3.1-Terminus-Q8_0.dat \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-256x20B-safetensors-BF16-00001-of-00030.gguf \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-smol-IQ4_KSS.gguf \ IQ4_KSS \ 192 ``` </details> ## IQ3_K 290.897 GiB (3.724 BPW) Final estimate: PPL = 3.4781 +/- 0.02052 <details> <summary>👈 Secret Recipe</summary> ```bash #!/usr/bin/env bash custom=" ## Attention [0-60] (GPU) blk\..*\.attn_k_b\.weight=q8_0 blk\..*\.attn_v_b\.weight=q8_0 # Balance of attn tensors blk\..*\.attn_kv_a_mqa\.weight=q8_0 blk\..*\.attn_q_a\.weight=q8_0 blk\..*\.attn_q_b\.weight=q8_0 blk\..*\.attn_output\.weight=iq6_k ## First Three Dense Layers [0-2] (GPU) blk\.0\.ffn_down\.weight=q8_0 blk\.0\.ffn_(gate|up)\.weight=q8_0 blk\..*\.ffn_down\.weight=q8_0 blk\..*\.ffn_(gate|up)\.weight=iq6_k ## Shared Expert [3-60] (GPU) blk\..*\.ffn_down_shexp\.weight=q8_0 blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k ## Routed Experts [3-60] (CPU) blk\..*\.ffn_down_exps\.weight=iq4_kss blk\..*\.ffn_(gate|up)_exps\.weight=iq3_k ## Token embedding and output tensors (GPU) token_embd\.weight=iq4_k output\.weight=iq6_k " custom=$( echo "$custom" | grep -v '^#' | \ sed -Ez 's:\n+:,:g;s:,$::;s:^,::' ) SOCKET=1 numactl -N "$SOCKET" -m "$SOCKET" \ ./build/bin/llama-quantize \ --custom-q "$custom" \ --imatrix /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/imatrix-DeepSeek-V3.1-Terminus-Q8_0.dat \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-256x20B-safetensors-BF16-00001-of-00030.gguf \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-IQ3_K.gguf \ IQ3_K \ 192 ``` </details> ## IQ3_KS 276.958 GiB (3.545 BPW) Final estimate: PPL = 3.5174 +/- 0.02089 <details> <summary>👈 Secret Recipe</summary> ```bash #!/usr/bin/env bash custom=" ## Attention [0-60] (GPU) blk\..*\.attn_k_b\.weight=q8_0 blk\..*\.attn_v_b\.weight=q8_0 # Balance of attn tensors blk\..*\.attn_kv_a_mqa\.weight=q8_0 blk\..*\.attn_q_a\.weight=q8_0 blk\..*\.attn_q_b\.weight=iq6_k blk\..*\.attn_output\.weight=iq6_k ## First Three Dense Layers [0-2] (GPU) blk\.0\.ffn_down\.weight=iq6_k blk\.0\.ffn_(gate|up)\.weight=iq6_k blk\..*\.ffn_down\.weight=iq5_ks blk\..*\.ffn_(gate|up)\.weight=iq5_ks ## Shared Expert [3-60] (GPU) blk\..*\.ffn_down_shexp\.weight=iq5_ks blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks ## Routed Experts [3-60] (CPU) blk\..*\.ffn_down_exps\.weight=iq4_kss blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks ## Token embedding and output tensors (GPU) token_embd\.weight=iq4_k output\.weight=iq6_k " custom=$( echo "$custom" | grep -v '^#' | \ sed -Ez 's:\n+:,:g;s:,$::;s:^,::' ) SOCKET=0 
numactl -N "$SOCKET" -m "$SOCKET" \ ./build/bin/llama-quantize \ --custom-q "$custom" \ --imatrix /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/imatrix-DeepSeek-V3.1-Terminus-Q8_0.dat \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-256x20B-safetensors-BF16-00001-of-00030.gguf \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-IQ3_KS.gguf \ IQ3_KS \ 192 ``` </details> ## IQ2_KL 231.356 GiB (2.962 BPW) Final estimate: PPL = 3.7112 +/- 0.02248 <details> <summary>👈 Secret Recipe</summary> ```bash #!/usr/bin/env bash custom=" ## Attention [0-60] (GPU) blk\..*\.attn_k_b\.weight=q8_0 blk\..*\.attn_v_b\.weight=q8_0 # Balance of attn tensors blk\..*\.attn_kv_a_mqa\.weight=q8_0 blk\..*\.attn_q_a\.weight=q8_0 blk\..*\.attn_q_b\.weight=q8_0 blk\..*\.attn_output\.weight=iq6_k ## First Three Dense Layers [0-2] (GPU) blk\.0\.ffn_down\.weight=q8_0 blk\.0\.ffn_(gate|up)\.weight=q8_0 blk\..*\.ffn_down\.weight=iq5_ks blk\..*\.ffn_(gate|up)\.weight=iq5_ks ## Shared Expert [3-60] (GPU) blk\..*\.ffn_down_shexp\.weight=iq5_ks blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks ## Routed Experts [3-60] (CPU) blk\..*\.ffn_down_exps\.weight=iq3_ks blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl ## Token embedding and output tensors (GPU) token_embd\.weight=iq4_k output\.weight=iq6_k " custom=$( echo "$custom" | grep -v '^#' | \ sed -Ez 's:\n+:,:g;s:,$::;s:^,::' ) SOCKET=0 numactl -N "$SOCKET" -m "$SOCKET" \ ./build/bin/llama-quantize \ --custom-q "$custom" \ --imatrix /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/imatrix-DeepSeek-V3.1-Terminus-Q8_0.dat \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-256x20B-safetensors-BF16-00001-of-00030.gguf \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-IQ2_KL.gguf \ IQ2_KL \ 192 ``` </details> ## IQ2_KS 193.144 GiB (2.472 BPW) Final estimate: PPL = 4.0280 +/- 0.02500 <details> <summary>👈 Secret Recipe</summary> ```bash #!/usr/bin/env bash custom=" ## Attention [0-60] (GPU) blk\..*\.attn_k_b\.weight=q8_0 blk\..*\.attn_v_b\.weight=q8_0 # Balance of attn tensors blk\..*\.attn_kv_a_mqa\.weight=q8_0 blk\..*\.attn_q_a\.weight=q8_0 blk\..*\.attn_q_b\.weight=q8_0 blk\..*\.attn_output\.weight=iq6_k ## First Three Dense Layers [0-2] (GPU) blk\..*\.ffn_down\.weight=iq5_ks blk\..*\.ffn_(gate|up)\.weight=iq5_ks ## Shared Expert [3-60] (GPU) blk\..*\.ffn_down_shexp\.weight=iq5_ks blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks ## Routed Experts [3-60] (CPU) blk\..*\.ffn_down_exps\.weight=iq2_kl blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks ## Token embedding and output tensors (GPU) token_embd\.weight=iq4_k output\.weight=iq6_k " custom=$( echo "$custom" | grep -v '^#' | \ sed -Ez 's:\n+:,:g;s:,$::;s:^,::' ) SOCKET=0 numactl -N "$SOCKET" -m "$SOCKET" \ ./build/bin/llama-quantize \ --custom-q "$custom" \ --imatrix /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/imatrix-DeepSeek-V3.1-Terminus-Q8_0.dat \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-256x20B-safetensors-BF16-00001-of-00030.gguf \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-IQ2_KS-main.gguf \ IQ2_KS \ 192 ``` </details> ## IQ1_KT 155.248 GiB (1.987 BPW) Final estimate: PPL = 4.5310 +/- 0.02908 <details> <summary>👈 Secret Recipe</summary> ```bash #!/usr/bin/env bash custom=" ## Attention [0-60] (GPU) blk\..*\.attn_k_b\.weight=q8_0 blk\..*\.attn_v_b\.weight=q8_0 # Balance of attn tensors blk\..*\.attn_kv_a_mqa\.weight=q8_0 blk\..*\.attn_q_a\.weight=iq6_k 
blk\..*\.attn_q_b\.weight=iq6_k blk\..*\.attn_output\.weight=iq5_ks ## First Three Dense Layers [0-2] (GPU) blk\.0\.ffn_down\.weight=iq6_k blk\.0\.ffn_(gate|up)\.weight=iq6_k blk\..*\.ffn_down\.weight=iq5_ks blk\..*\.ffn_(gate|up)\.weight=iq5_ks ## Shared Expert [3-60] (GPU) blk\..*\.ffn_down_shexp\.weight=iq5_ks blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks ## Routed Experts [3-60] (CPU) blk\..*\.ffn_down_exps\.weight=iq2_kt blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt ## Token embedding and output tensors (GPU) token_embd\.weight=iq4_k output\.weight=iq6_k " custom=$( echo "$custom" | grep -v '^#' | \ sed -Ez 's:\n+:,:g;s:,$::;s:^,::' ) SOCKET=0 numactl -N "$SOCKET" -m "$SOCKET" \ ./build/bin/llama-quantize \ --custom-q "$custom" \ --imatrix /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/imatrix-DeepSeek-V3.1-Terminus-Q8_0.dat \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-256x20B-safetensors-BF16-00001-of-00030.gguf \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-IQ1_KT.gguf \ IQ1_KT \ 192 ``` </details> ## IQ1_S 136.331 GiB (1.745 BPW) Final estimate: PPL = 5.4829 +/- 0.03666 <details> <summary>👈 Secret Recipe</summary> ```bash #!/usr/bin/env bash custom=" ## Attention [0-60] (GPU) blk\..*\.attn_k_b\.weight=q8_0 blk\..*\.attn_v_b\.weight=q8_0 # Balance of attn tensors blk\..*\.attn_kv_a_mqa\.weight=q8_0 blk\..*\.attn_q_a\.weight=iq6_k blk\..*\.attn_q_b\.weight=iq6_k blk\..*\.attn_output\.weight=iq6_k ## First Three Dense Layers [0-2] (GPU) blk\.0\.ffn_down\.weight=iq5_ks blk\.0\.ffn_(gate|up)\.weight=iq4_ks blk\..*\.ffn_down\.weight=iq4_kss blk\..*\.ffn_(gate|up)\.weight=iq4_kss ## Shared Expert [3-60] (GPU) blk\..*\.ffn_down_shexp\.weight=iq5_ks blk\..*\.ffn_(gate|up)_shexp\.weight=iq4_ks ## Routed Experts [3-60] (CPU) blk\..*\.ffn_down_exps\.weight=iq1_m blk\..*\.ffn_(gate|up)_exps\.weight=iq1_s ## Token embedding and output tensors (GPU) token_embd\.weight=iq4_k output\.weight=iq6_k " custom=$( echo "$custom" | grep -v '^#' | \ sed -Ez 's:\n+:,:g;s:,$::;s:^,::' ) SOCKET=0 numactl -N "$SOCKET" -m "$SOCKET" \ ./build/bin/llama-quantize \ --custom-q "$custom" \ --imatrix /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/imatrix-DeepSeek-V3.1-Terminus-Q8_0.dat \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-256x20B-safetensors-BF16-00001-of-00030.gguf \ /mnt/data/models/ubergarm/DeepSeek-V3.1-Terminus-GGUF/DeepSeek-V3.1-Terminus-IQ1_S.gguf \ IQ1_S \ 192 ``` </details> ## Quick Start ```bash # Clone and checkout $ git clone https://github.com/ikawrakow/ik_llama.cpp $ cd ik_llama.cpp # Build for hybrid CPU+CUDA $ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON -DGGML_BLAS=OFF -DGGML_SCHED_MAX_COPIES=1 $ cmake --build build --config Release -j $(nproc) # Run API server Hybrid CPU+GPU # model is the first file of the GGUF splits # remove `--no-mmap` if you don't have enough RAM and run it off of fast NVMe drive $ ./build/bin/llama-server \ --model "$model"\ --alias ubergarm/DeepSeek-V3.1-Terminus-GGUF \ --ctx-size 32768 \ -ctk q8_0 \ -fa -fmoe \ -mla 3 -amb 512 \ -ngl 99 \ -ot exps=CPU \ --parallel 1 \ --threads 8 \ --host 127.0.0.1 \ --port 8080 \ --no-display-prompt \ --no-mmap \ --jinja # Run API Server CPU-Only $ numactl -N 0 -m 0 \ ./build/bin/llama-server \ --model "$model"\ --alias ubergarm/DeepSeek-V3.1-Terminus-GGUF \ --ctx-size 131072 \ -ub 4096 -b 4096 \ -ctk q8_0 \ -fa -fmoe \ -mla 3 \ --parallel 1 \ --threads 128 \ --threads-batch 192 \ --numa numactl \ --host 127.0.0.1 
\ --port 8080 \ --no-display-prompt \ --jinja \ --no-mmap # Enable/Disable Thinking? Not sure exactly, maybe something like this? # `--chat-template-kwargs '{"thinking": false }'` ``` Multi-GPU is well supported with custom `-ot ...=CUDA1` offload regex arguments etc. ## References * [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp) * [Getting Started Guide (already out of date lol)](https://github.com/ikawrakow/ik_llama.cpp/discussions/258) * [Quant Cookers Guide](https://github.com/ikawrakow/ik_llama.cpp/discussions/434) * [Compiling triton-cpu](https://github.com/triton-lang/triton-cpu/issues/237#issuecomment-2878180022) * [fp8 to bf16 safetensors casting without GPU](https://github.com/ggml-org/llama.cpp/issues/14762#issuecomment-3098571703) * [avx512 avx_vnni Zen5 experimental optimizations](https://github.com/ikawrakow/ik_llama.cpp/pull/710) * [ubergarm-imatrix-calibration-corpus-v02.txt](https://gist.github.com/ubergarm/edfeb3ff9c6ec8b49e88cdf627b0711a?permalink_comment_id=5682584#gistcomment-5682584)
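To grab one of the quants above and point `$model` at the first split for the Quick Start commands, a hedged download sketch (the per-quant folder layout and split-file naming here are assumptions; check the repo's file listing first):

```bash
# Hedged sketch: folder and file-name patterns are assumptions; verify against the repo listing.
pip install -U "huggingface_hub[cli]"
huggingface-cli download ubergarm/DeepSeek-V3.1-Terminus-GGUF \
  --include "IQ2_KL/*" \
  --local-dir ./DeepSeek-V3.1-Terminus-GGUF
# llama-server takes the first file of the split as --model
model=$(ls ./DeepSeek-V3.1-Terminus-GGUF/IQ2_KL/*-00001-of-*.gguf | head -n 1)
```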
corzamennav/blockassist-bc-territorial_wild_antelope_1758761055
corzamennav
2025-09-25T00:45:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T00:45:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
theoden8/open-r1-qwen-2.5-1.5b-r1-countdown
theoden8
2025-09-25T00:44:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T16:24:37Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct library_name: transformers model_name: open-r1-qwen-2.5-1.5b-r1-countdown tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for open-r1-qwen-2.5-1.5b-r1-countdown This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="theoden8/open-r1-qwen-2.5-1.5b-r1-countdown", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dweam/open-r1/runs/wh59d85m) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.48.1 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
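Since the card names GRPO but shows only inference, here is a minimal TRL GRPO sketch for orientation; the reward function and tiny prompt set are illustrative placeholders, not the countdown-task reward actually used for this checkpoint:

```python
# Hedged sketch of a TRL GRPO setup; reward and data are toy placeholders.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def toy_reward(completions, **kwargs):
    # Placeholder reward: prefer shorter completions (the real run scored countdown solutions).
    return [-float(len(c)) for c in completions]

train_dataset = Dataset.from_dict(
    {"prompt": ["Use 3, 5 and 7 once each to reach 12.", "Use 2, 4 and 8 once each to reach 16."]}
)
args = GRPOConfig(
    output_dir="grpo-countdown-sketch",
    per_device_train_batch_size=2,
    num_generations=2,
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    reward_funcs=toy_reward,
    args=args,
    train_dataset=train_dataset,
)
# trainer.train()
```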
Karanjaspreet/qwen
Karanjaspreet
2025-09-25T00:42:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-09-24T22:10:36Z
--- base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Karanjaspreet - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
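A quick-start snippet in the style of the other cards in this listing; it assumes the repo holds a complete 4-bit (bitsandbytes) checkpoint plus tokenizer and a CUDA machine is available:

```python
# Hedged sketch: assumes a full 4-bit (bitsandbytes) checkpoint and tokenizer in the repo.
from transformers import pipeline

generator = pipeline("text-generation", model="Karanjaspreet/qwen", device_map="auto")
output = generator(
    [{"role": "user", "content": "Give me a one-sentence fun fact about space."}],
    max_new_tokens=64,
    return_full_text=False,
)[0]
print(output["generated_text"])
```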
govadi/GSTT-v202508
govadi
2025-09-25T00:41:36Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-25T00:39:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yakshithk/t5-small-baseline
yakshithk
2025-09-25T00:40:55Z
0
0
null
[ "safetensors", "t5", "region:us" ]
null
2025-09-24T23:31:42Z
## 📌 Model Description

This model is a **fine-tuned version of `google/t5-small`** trained on a custom text-to-text dataset (derived from cleaned news articles). The objective was to build a baseline encoder-decoder model that can transform raw article text into simplified outputs (summaries / cleaned text).

* **Architecture**: T5-small (60M parameters)
* **Task**: Text-to-Text Generation (`text2text-generation`)
* **Framework**: Hugging Face Transformers (PyTorch backend)
* **Training Logs**: Tracked with Weights & Biases (W&B)

---

## 🛠 Intended Uses & Limitations

### ✅ Intended uses

* Text simplification
* News article summarization
* As a baseline for experimenting with encoder-decoder fine-tuning

### ⚠️ Limitations

* Small model size → may hallucinate or underperform on complex tasks
* Not suitable for production-level summarization without further fine-tuning
* Dataset size was limited (baseline experiment), so generalization may be weak

---

## 📊 Training and Evaluation

### Dataset

The model was fine-tuned on a **custom dataset of news articles** scraped and cleaned in [Week 1’s pipeline](link to repo).

* Input: Raw article text
* Output: Processed / summarized text

👉 Future versions will be trained on larger, more diverse datasets.

### Hyperparameters

* learning_rate: `5e-05`
* train_batch_size: `8`
* eval_batch_size: `8`
* gradient_accumulation_steps: `2`
* optimizer: `adamw_torch_fused`
* num_epochs: `20`
* mixed_precision_training: Native AMP

### Results

| Epoch | Validation Loss |
| ----- | --------------- |
| 1     | 9.7087          |
| 5     | 4.3784          |
| 10    | 2.3668          |
| 15    | 1.9624          |
| 20    | 1.8684          |

* Final Validation Loss: **1.8684**
* Qualitative evaluation: outputs were shorter, cleaner summaries compared to baseline `t5-small`.

---

## 🚀 How to Use

```python
from transformers import pipeline

model = "yakshithk/t5-small-baseline"
summarizer = pipeline("text2text-generation", model=model)

input_text = "The quick brown fox jumped over the lazy dog."
output = summarizer(input_text, max_length=50, do_sample=False)
print(output[0]["generated_text"])
```

---

## 🔍 Future Work

* Fine-tune larger models (`flan-t5-base`, `flan-t5-large`)
* Train on domain-specific datasets (finance, healthcare)
* Deploy via Hugging Face Spaces for live demo

---

## 📚 Citation

If you use this model in your research or project:

```bibtex
@misc{yakshithk_t5small_baseline,
  author = {Yakshith K},
  title = {t5-small-baseline: Fine-tuned T5-small on custom dataset},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/yakshithk/t5-small-baseline}},
}
```

---
alphaduriendur/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF
alphaduriendur
2025-09-25T00:40:36Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-25T00:40:27Z
--- license: mit library_name: transformers base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B tags: - llama-cpp - gguf-my-repo --- # alphaduriendur/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo alphaduriendur/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo alphaduriendur/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo alphaduriendur/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo alphaduriendur/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -c 2048 ```
nightmedia/KAT-Dev-qx86-hi-mlx
nightmedia
2025-09-25T00:39:55Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "multilingual", "base_model:Kwaipilot/KAT-Dev", "base_model:quantized:Kwaipilot/KAT-Dev", "license:other", "8-bit", "region:us" ]
text-generation
2025-09-24T15:02:12Z
---
language:
- multilingual
license: other
license_name: kwaipilot-license
license_link: LICENSE
library_name: mlx
pipeline_tag: text-generation
base_model: Kwaipilot/KAT-Dev
tags:
- mlx
---

# KAT-Dev-qx86-hi-mlx

A strong coding model, though a bit slow; the qx65-hi quant is faster. Running the full tests will take a while, but initial metrics show high performance.

This model [KAT-Dev-qx86-hi-mlx](https://huggingface.co/nightmedia/KAT-Dev-qx86-hi-mlx) was converted to MLX format from [Kwaipilot/KAT-Dev](https://huggingface.co/Kwaipilot/KAT-Dev) using mlx-lm version **0.28.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/KAT-Dev-qx86-hi-mlx")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
rcsb/rcsb-esm
rcsb
2025-09-25T00:37:48Z
0
0
null
[ "license:other", "region:us" ]
null
2025-09-23T22:41:25Z
---
license: other
license_name: evolutionary-scale-cambrian-non-commercial-license
license_link: >-
  https://www.evolutionaryscale.ai/policies/cambrian-non-commercial-license-agreement
---

# rcsb-esm

ESM-3 model used in RCSB PDB packages.

## License

This project uses the EvolutionaryScale ESM-3 model and is distributed under the [Cambrian Non-Commercial License Agreement](https://www.evolutionaryscale.ai/policies/cambrian-non-commercial-license-agreement).
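For orientation, a minimal loading sketch with the EvolutionaryScale `esm` package is shown below. The checkpoint name, example sequence, and generation settings are illustrative assumptions and are not documented by this repository; adjust them to the weights actually used by the RCSB PDB packages.

```python
# Minimal sketch (assumptions: the `esm` package is installed and an open
# ESM-3 checkpoint is available under the name below).
from esm.models.esm3 import ESM3
from esm.sdk.api import ESMProtein, GenerationConfig

model = ESM3.from_pretrained("esm3-open")  # hypothetical checkpoint id
protein = ESMProtein(sequence="MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")  # illustrative sequence

# Generate a structure track for the sequence (illustrative settings).
protein = model.generate(protein, GenerationConfig(track="structure", num_steps=8))
```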
thegdpranavl/EnglishWiPodTest
thegdpranavl
2025-09-25T00:37:06Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-8B-Base-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-8B-Base-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-25T00:36:46Z
---
base_model: unsloth/Qwen3-8B-Base-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** thegdpranavl
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-Base-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
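A minimal inference sketch with plain transformers is shown below; it assumes the uploaded safetensors weights load directly with `AutoModelForCausalLM`, and the prompt and generation settings are illustrative.

```python
# Minimal sketch (assumption: the uploaded weights load directly with transformers).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thegdpranavl/EnglishWiPodTest"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a two-sentence introduction for an English-learning podcast episode."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```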
wikeeyang/Magic-Wan-Image-v1.0
wikeeyang
2025-09-25T00:35:57Z
2,040
12
diffusers
[ "diffusers", "text-to-image", "zh", "en", "base_model:Wan-AI/Wan2.2-T2V-A14B", "base_model:finetune:Wan-AI/Wan2.2-T2V-A14B", "license:apache-2.0", "region:us" ]
text-to-image
2025-09-05T02:12:41Z
---
license: apache-2.0
language:
- zh
- en
base_model:
- Wan-AI/Wan2.2-T2V-A14B
pipeline_tag: text-to-image
library_name: diffusers
---

## Magic-Wan-Image V1.0

This is an experimental model: a mixed and fine-tuned version of the Wan2.2-T2V-14B text-to-video model, intended to let enthusiasts of Wan 2.2 use the T2V model to generate images as simply and conveniently as they would use Flux. The Wan 2.2 model excels at generating realistic images while also accommodating various styles; however, since it evolved from a video model, its generalization for still-image generation is slightly weaker. This model balances realism and stylistic variation while preserving as much detail as possible, achieving creativity and expressiveness roughly comparable to the Flux.1-Dev model. The mixing method layers the High-Noise and Low-Noise parts of the Wan2.2-T2V-14B model, blends them with different weight ratios, and then applies a light fine-tune. As an experimental model it may still have shortcomings; feedback is welcome and will inform improvements in future versions.

### Also on: https://civitai.com/models/1927692, https://www.modelscope.cn/models/wikeeyang/Magic-Wan-Image

### GGUF Version: Please refer to https://huggingface.co/befox/Magic-Wan-Image-v1.0-GGUF

### Usage:

Please refer to the example workflows: ComfyUI_example_workflow_00001.png, ComfyUI_example_workflow_00002.png, ComfyUI_example_workflow_00003.png

- model shift: 1.0 - 8.0, feel free to experiment;
- model cfg: 1.0 - 4.0, feel free to experiment;
- inference steps: 20 - 50, feel free to experiment;
- sampler / scheduler: deis/simple or euler/beta, or any combination, feel free to experiment.

<p align="center">
  <img src="ComfyUI_example_workflow_00001.png" width="1200"/>
  <img src="ComfyUI_example_workflow_00003.png" width="1200"/>
</p>

## License Agreement

Wan2.2-T2V-A14B is licensed under Apache 2.0.
Gibsonradd/Comics
Gibsonradd
2025-09-25T00:34:05Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2025-09-25T00:34:05Z
--- license: artistic-2.0 ---
aoi-ot/VibeVoice-7B
aoi-ot
2025-09-25T00:32:41Z
5,881
22
vibevoice
[ "vibevoice", "safetensors", "Podcast", "text-to-speech", "en", "zh", "arxiv:2508.19205", "arxiv:2412.08635", "license:mit", "region:us" ]
text-to-speech
2025-09-04T05:16:39Z
--- license: mit language: - en - zh pipeline_tag: text-to-speech tags: - Podcast library_name: vibevoice --- ## VibeVoice: A Frontier Open-Source Text-to-Speech Model > This repository contains a copy of model weights obtained from ModelScope([microsoft/VibeVoice-7B](https://www.modelscope.cn/models/microsoft/VibeVoice-7B)). > The license for this model is the `MIT License`, **which permits redistribution**. > > My understanding of the MIT License, which is consistent with the broader open-source community's consensus, > is that it grants the right to distribute copies of the software and its derivatives. > Therefore, I am lawfully exercising the right to redistribute this model. > > If you are a rights holder and believe this understanding of the license is incorrect, please submit a DMCA complaint to Hugging Face at [email protected]_ VibeVoice is a novel framework designed for generating expressive, long-form, multi-speaker conversational audio, such as podcasts, from text. It addresses significant challenges in traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking. A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a next-token diffusion framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details. The model can synthesize speech up to **90 minutes** long with up to **4 distinct speakers**, surpassing the typical 1-2 speaker limits of many prior models. ➡️ **Technical Report:** [VibeVoice Technical Report](https://arxiv.org/abs/2508.19205) ➡️ **Project Page:** [microsoft/VibeVoice](https://microsoft.github.io/VibeVoice) ➡️ **Code:** [microsoft/VibeVoice-Code](https://github.com/microsoft/VibeVoice) <p align="left"> <img src="figures/Fig1.png" alt="VibeVoice Overview" height="250px"> </p> ## Training Details Transformer-based Large Language Model (LLM) integrated with specialized acoustic and semantic tokenizers and a diffusion-based decoding head. - LLM: Qwen2.5 for this release. - Tokenizers: - Acoustic Tokenizer: Based on a σ-VAE variant (proposed in [LatentLM](https://arxiv.org/pdf/2412.08635)), with a mirror-symmetric encoder-decoder structure featuring 7 stages of modified Transformer blocks. Achieves 3200x downsampling from 24kHz input. Encoder/decoder components are ~340M parameters each. - Semantic Tokenizer: Encoder mirrors the Acoustic Tokenizer's architecture (without VAE components). Trained with an ASR proxy task. - Diffusion Head: Lightweight module (4 layers, ~600M parameters) conditioned on LLM hidden states. Predicts acoustic VAE features using a Denoising Diffusion Probabilistic Models (DDPM) process. Uses Classifier-Free Guidance (CFG) and DPM-Solver (and variants) during inference. - Context Length: Trained with a curriculum increasing up to 32,768 tokens. - Training Stages: - Tokenizer Pre-training: Acoustic and Semantic tokenizers are pre-trained separately. - VibeVoice Training: Pre-trained tokenizers are frozen; only the LLM and diffusion head parameters are trained. A curriculum learning strategy is used for input sequence length (4k -> 16K -> 32K). Text tokenizer not explicitly specified, but the LLM (Qwen2.5) typically uses its own. 
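As a quick consistency check on the figures above: the 3200x downsampling of 24 kHz audio works out to 24,000 / 3,200 = 7.5 acoustic frames per second, matching the 7.5 Hz tokenizer frame rate quoted earlier in this card.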
Audio is "tokenized" via the acoustic and semantic tokenizers. ## Models | Model | Context Length | Generation Length | Weight | |-------|----------------|----------|----------| | VibeVoice-0.5B-Streaming | - | - | On the way | | VibeVoice-1.5B | 64K | ~90 min | [HF link](https://huggingface.co/microsoft/VibeVoice-1.5B) | | VibeVoice-Large| 32K | ~45 min | You are here. | ## Installation and Usage Please refer to [GitHub README](https://github.com/microsoft/VibeVoice?tab=readme-ov-file#installation) ## Responsible Usage ### Direct intended uses The VibeVoice model is limited to research purpose use exploring highly realistic audio dialogue generation detailed in the [tech report](https://arxiv.org/pdf/2508.19205). ### Out-of-scope uses Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by MIT License. Use to generate any text transcript. Furthermore, this release is not intended or licensed for any of the following scenarios: - Voice impersonation without explicit, recorded consent – cloning a real individual’s voice for satire, advertising, ransom, social‑engineering, or authentication bypass. - Disinformation or impersonation – creating audio presented as genuine recordings of real people or events. - Real‑time or low‑latency voice conversion – telephone or video‑conference “live deep‑fake” applications. - Unsupported language – the model is trained only on English and Chinese data; outputs in other languages are unsupported and may be unintelligible or offensive. - Generation of background ambience, Foley, or music – VibeVoice is speech‑only and will not produce coherent non‑speech audio. ## Risks and limitations While efforts have been made to optimize it through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model. Potential for Deepfakes and Disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content. English and Chinese only: Transcripts in language other than English or Chinese may result in unexpected audio outputs. Non-Speech Audio: The model focuses solely on speech synthesis and does not handle background noise, music, or other sound effects. Overlapping Speech: The current model does not explicitly model or generate overlapping speech segments in conversations. ## Recommendations We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly. To mitigate the risks of misuse, we have: Embedded an audible disclaimer (e.g. “This segment was generated by AI”) automatically into every synthesized audio file. Added an imperceptible watermark to generated audio so third parties can verify VibeVoice provenance. Please see contact information at the end of this model card. 
Logged inference requests (hashed) for abuse pattern detection and publishing aggregated statistics quarterly. Users are responsible for sourcing their datasets legally and ethically. This may include securing appropriate rights and/or anonymizing data prior to use with VibeVoice. Users are reminded to be mindful of data privacy concerns. ## Contact This project was conducted by members of Microsoft Research. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected/offensive behavior in our technology, please contact us at [email protected]. If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.
Ahmed107/whisper-small-speaker-asr-final
Ahmed107
2025-09-25T00:29:15Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-25T00:02:30Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer model-index: - name: whisper-small-speaker-asr-final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-speaker-asr-final This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.56.2 - Pytorch 2.8.0+cu126 - Datasets 4.1.1 - Tokenizers 0.22.0 # infrance ```python # ========================================================== # STEP 0: INSTALL LIBRARIES # # ========================================================== # print("Installing necessary libraries...") # !pip install -q --upgrade transformers datasets accelerate torch torchaudio soundfile librosa import torch import librosa import numpy as np import os from transformers import WhisperProcessor, WhisperForConditionalGeneration # # --- (Optional) Mount Google Drive --- # try: # from google.colab import drive # drive.mount('/content/drive') # print("Google Drive mounted successfully.") # except ImportError: # print("Not in a Google Colab environment. Skipping Google Drive mount.") # ========================================================== # STEP 1: DEFINE THE FINAL, ROBUST, PRODUCTION-READY MODEL CLASS # ========================================================== class WhisperForSpeakerASR(WhisperForConditionalGeneration): def __init__(self, config): super().__init__(config) self.diarization_head = torch.nn.Linear(config.d_model, 2) # ### THE FINAL, DEFINITIVE FIX - PART 1 ### # Create a temporary storage to capture the logits during generation self.latest_diarization_logits = None def forward( self, input_features=None, labels=None, diarization_labels=None, **kwargs, ): # We need to get the encoder_hidden_states. if "encoder_outputs" in kwargs and kwargs["encoder_outputs"] is not None: encoder_hidden_states = kwargs["encoder_outputs"][0] else: if input_features is None: raise ValueError("input_features must be provided when encoder_outputs is not.") encoder_outputs = self.model.encoder( input_features=input_features, output_hidden_states=True ) encoder_hidden_states = encoder_outputs.last_hidden_state # Run the diarization head on the encoder states diarization_logits = self.diarization_head(encoder_hidden_states) # ### THE FINAL, DEFINITIVE FIX - PART 2 ### # Store the logits so we can access them after calling .generate() self.latest_diarization_logits = diarization_logits # We now call the ORIGINAL forward pass of the base Whisper model. 
kwargs["encoder_outputs"] = (encoder_hidden_states,) outputs = super().forward( input_features=input_features, labels=labels, **kwargs, ) # Attach our custom outputs to the result for consistency (optional but good practice) outputs.diarization_logits = diarization_logits # Handle custom loss during training if diarization_labels is not None: loss_fct = torch.nn.CrossEntropyLoss() reshaped_logits = diarization_logits.view(-1, 2) reshaped_labels = diarization_labels.view(-1) diarization_loss = loss_fct(reshaped_logits, reshaped_labels) outputs.loss = diarization_loss if outputs.loss is None else outputs.loss + diarization_loss return outputs # ========================================================== # STEP 2: SETUP MODEL AND INFERENCE FUNCTION # ========================================================== MODEL_HUB_PATH = "Ahmed107/whisper-small-speaker-asr-final" def test_and_format_from_path(audio_path): print("="*60) print("🚀 Starting Final Inference Test...") print("="*60) if not os.path.exists(audio_path): print(f"❌ ERROR: Audio file not found: {audio_path}") return print(f"Loading model from: {MODEL_HUB_PATH}") device = "cuda" if torch.cuda.is_available() else "cpu" processor = WhisperProcessor.from_pretrained(MODEL_HUB_PATH) model = WhisperForSpeakerASR.from_pretrained(MODEL_HUB_PATH).to(device).eval() print(f"Loading audio file: {audio_path}") audio_array, _ = librosa.load(audio_path, sr=16000) print("Audio loaded successfully.") chunk_duration_s = 30 samples_per_chunk = chunk_duration_s * 16000 time_per_token = chunk_duration_s / 1500 speaker_map = {0: "SPEAKER_A", 1: "SPEAKER_B"} print("Running model for transcription and diarization...") full_transcription = "" diarization_timeline = [] with torch.no_grad(): for i in range(0, len(audio_array), samples_per_chunk): chunk_start_time = i / 16000 chunk_audio = audio_array[i : i + samples_per_chunk] if len(chunk_audio) < samples_per_chunk: chunk_audio = np.pad(chunk_audio, (0, samples_per_chunk - len(chunk_audio))) inputs = processor(chunk_audio, return_tensors="pt", sampling_rate=16000) input_features = inputs.input_features.to(device) # --- Transcription will now work correctly --- generated_ids = model.generate(input_features, language="en") transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] full_transcription += transcription + " " # ### THE FINAL, DEFINITIVE FIX - PART 3 ### # Instead of calling the model again, retrieve the stored logits predicted_ids = torch.argmax(model.latest_diarization_logits, dim=-1).squeeze(0).cpu() # Process timeline logic (remains the same) current_speaker_id, segment_start_token = -1, 0 for token_idx, speaker_id in enumerate(predicted_ids): speaker_id = speaker_id.item() if speaker_id != current_speaker_id: if current_speaker_id != -1: end_time = chunk_start_time + (token_idx * time_per_token) start_time = chunk_start_time + (segment_start_token * time_per_token) diarization_timeline.append({"speaker": speaker_map.get(current_speaker_id, "UNKNOWN"), "start": start_time, "end": end_time}) current_speaker_id, segment_start_token = speaker_id, token_idx if current_speaker_id != -1: end_time = chunk_start_time + (len(predicted_ids) * time_per_token) start_time = chunk_start_time + (segment_start_token * time_per_token) diarization_timeline.append({"speaker": speaker_map.get(current_speaker_id, "UNKNOWN"), "start": start_time, "end": end_time}) # Merge adjacent segments logic (remains the same) merged_timeline = [] if diarization_timeline: 
merged_timeline.append(diarization_timeline[0]) for segment in diarization_timeline[1:]: if segment["speaker"] == merged_timeline[-1]["speaker"] and segment["start"] - merged_timeline[-1]["end"] < 0.1: merged_timeline[-1]["end"] = segment["end"] else: merged_timeline.append(segment) # Final output for README print("\n\n" + "="*25 + " COPY BELOW THIS LINE " + "="*25) print("\n### Example Output\n") print("```text") print("Full Transcription:") print(full_transcription.strip()) print("\nDiarization Timeline:") for segment in merged_timeline: if segment['start'] < segment['end']: print(f"[{segment['start']:0>6.2f}s - {segment['end']:0>6.2f}s] {segment['speaker']}") print("```") print("\n" + "="*26 + " COPY ABOVE THIS LINE " + "="*26 + "\n") # ========================================================== # STEP 3: SPECIFY YOUR FILE PATH AND RUN THE TEST # ========================================================== # --- ⬇️⬇️⬇️ EDIT THIS LINE ⬇️⬇️⬇️ --- AUDIO_FILE_PATH = "/content/audio.wav" # --- ⬆️⬆️⬆️ EDIT THIS LINE ⬆️⬆️⬆️ --- # Run the function test_and_format_from_path(AUDIO_FILE_PATH) ```
adamkay397/lora_llama323binst_v4.0
adamkay397
2025-09-25T00:26:42Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-24T02:00:34Z
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** adamkay397
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
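A minimal loading sketch with Unsloth's `FastLanguageModel` is shown below; the 4-bit setting, sequence length, and prompt are illustrative assumptions rather than documented settings for this upload.

```python
# Minimal sketch (assumption: the repo loads with Unsloth's FastLanguageModel).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="adamkay397/lora_llama323binst_v4.0",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,     # illustrative
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode

inputs = tokenizer("Summarize the benefits of LoRA fine-tuning:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```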
ChenWu98/qwen_2.5_0.5b_sft_type_split_1
ChenWu98
2025-09-25T00:22:25Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "endpoints_compatible", "region:us" ]
null
2025-09-06T15:58:45Z
---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: qwen_2.5_0.5b_sft_type_split_1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for qwen_2.5_0.5b_sft_type_split_1

This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/qwen_2.5_0.5b_sft_type_split_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/jrczic24)

This model was trained with SFT.

### Framework versions

- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
NikolayKozloff/LFM2-2.6B-Q8_0-GGUF
NikolayKozloff
2025-09-25T00:20:51Z
0
1
transformers
[ "transformers", "gguf", "liquid", "lfm2", "edge", "llama-cpp", "gguf-my-repo", "text-generation", "en", "ar", "zh", "fr", "de", "ja", "ko", "es", "base_model:LiquidAI/LFM2-2.6B", "base_model:quantized:LiquidAI/LFM2-2.6B", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T00:20:38Z
--- library_name: transformers license: other license_name: lfm1.0 license_link: LICENSE language: - en - ar - zh - fr - de - ja - ko - es pipeline_tag: text-generation tags: - liquid - lfm2 - edge - llama-cpp - gguf-my-repo base_model: LiquidAI/LFM2-2.6B --- # NikolayKozloff/LFM2-2.6B-Q8_0-GGUF This model was converted to GGUF format from [`LiquidAI/LFM2-2.6B`](https://huggingface.co/LiquidAI/LFM2-2.6B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LiquidAI/LFM2-2.6B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/LFM2-2.6B-Q8_0-GGUF --hf-file lfm2-2.6b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/LFM2-2.6B-Q8_0-GGUF --hf-file lfm2-2.6b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/LFM2-2.6B-Q8_0-GGUF --hf-file lfm2-2.6b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/LFM2-2.6B-Q8_0-GGUF --hf-file lfm2-2.6b-q8_0.gguf -c 2048 ```
alphaduriendur/CodeLlama-7b-Python-hf-Q4_K_M-GGUF
alphaduriendur
2025-09-25T00:20:05Z
0
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-2", "llama-cpp", "gguf-my-repo", "text-generation", "code", "base_model:meta-llama/CodeLlama-7b-Python-hf", "base_model:quantized:meta-llama/CodeLlama-7b-Python-hf", "license:llama2", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T00:19:45Z
--- extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: "### LLAMA 2 COMMUNITY LICENSE AGREEMENT\n\"Agreement\" means\ \ the terms and conditions for use, reproduction, distribution and modification\ \ of the Llama Materials set forth herein. \"Documentation\" means the specifications,\ \ manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/.\ \ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity's behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf. \"Llama 2\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. \"Llama\ \ Materials\" means, collectively, Meta's proprietary Llama 2 and documentation\ \ (and any portion thereof) made available under this Agreement. \"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \ \ By clicking \"I Accept\" below or by using or distributing any portion or element\ \ of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights\ \ and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide,\ \ non- transferable and royalty-free limited license under Meta's intellectual property\ \ or other rights owned by Meta embodied in the Llama Materials to use, reproduce,\ \ distribute, copy, create derivative works of, and make modifications to the Llama\ \ Materials. \n \nb. Redistribution and Use. i. If you distribute or make\ \ the Llama Materials, or any derivative works thereof, available to a third party,\ \ you shall provide a copy of this Agreement to such third party. ii. If you\ \ receive Llama Materials, or any derivative works thereof, from a Licensee as\ \ part of an integrated end user product, then Section 2 of this Agreement will\ \ not apply to you. iii. You must retain in all copies of the Llama Materials that\ \ you distribute the following attribution notice within a \"Notice\" text file\ \ distributed as a part of such copies: \"Llama 2 is licensed under the LLAMA 2\ \ Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.\"\ \ iv. Your use of the Llama Materials must comply with applicable laws and regulations\ \ (including trade compliance laws and regulations) and adhere to the Acceptable\ \ Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy),\ \ which is hereby incorporated by reference into this Agreement. v. You will not\ \ use the Llama Materials or any output or results of the Llama Materials to improve\ \ any other large language model (excluding Llama 2 or derivative works thereof).\ \ 2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the\ \ monthly active users of the products or services made available by or for Licensee,\ \ or Licensee's affiliates, is greater than 700 million monthly active users in\ \ the preceding calendar month, you must request a license from Meta, which Meta\ \ may grant to you in its sole discretion, and you are not authorized to exercise\ \ any of the rights under this Agreement unless or until Meta otherwise expressly\ \ grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE\ \ LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON\ \ AN \"AS IS\" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED,\ \ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\ \ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\ \ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\ \ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND\ \ RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE\ \ LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE,\ \ PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST\ \ PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE\ \ DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY\ \ OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are\ \ granted under this Agreement, and in connection with the Llama Materials, neither\ \ Meta nor Licensee may use any name or mark owned by or associated with the other\ \ or any of its affiliates, except as required for reasonable and customary use\ \ in describing and redistributing the Llama Materials. b. Subject to Meta's ownership\ \ of Llama Materials and derivatives made by or for Meta, with respect to any derivative\ \ works and modifications of the Llama Materials that are made by you, as between\ \ you and Meta, you are and will be the owner of such derivative works and modifications.\ \ c. If you institute litigation or other proceedings against Meta or any entity\ \ (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama\ \ Materials or Llama 2 outputs or results, or any portion of any of the foregoing,\ \ constitutes infringement of intellectual property or other rights owned or licensable\ \ by you, then any licenses granted to you under this Agreement shall terminate\ \ as of the date such litigation or claim is filed or instituted. You will indemnify\ \ and hold harmless Meta from and against any claim by any third party arising\ \ out of or related to your use or distribution of the Llama Materials. 6. Term\ \ and Termination. The term of this Agreement will commence upon your acceptance\ \ of this Agreement or access to the Llama Materials and will continue in full\ \ force and effect until terminated in accordance with the terms and conditions\ \ herein. Meta may terminate this Agreement if you are in breach of any term or\ \ condition of this Agreement. Upon termination of this Agreement, you shall delete\ \ and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the \ \ termination of this Agreement. 7. Governing Law and Jurisdiction. 
This Agreement\ \ will be governed and construed under the laws of the State of California without\ \ regard to choice of law principles, and the UN Convention on Contracts for the\ \ International Sale of Goods does not apply to this Agreement. The courts of California\ \ shall have exclusive jurisdiction of any dispute arising out of this Agreement.\ \ USE POLICY ### Llama 2 Acceptable Use Policy Meta is committed to promoting safe\ \ and fair use of its tools and features, including Llama 2. If you access or use\ \ Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy\ \ of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).\ \ #### Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You\ \ agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law\ \ or others’ rights, including to:\n 1. Engage in, promote, generate, contribute\ \ to, encourage, plan, incite, or further illegal or unlawful activity or content,\ \ such as: \n 1. Violence or terrorism \n 2. Exploitation or harm to children,\ \ including the solicitation, creation, acquisition, or dissemination of child exploitative\ \ content or failure to report Child Sexual Abuse Material\n 3. Human trafficking,\ \ exploitation, and sexual violence\n 4. The illegal distribution of information\ \ or materials to minors, including obscene materials, or failure to employ legally\ \ required age-gating in connection with such information or materials.\n 5.\ \ Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote,\ \ incite, or facilitate the harassment, abuse, threatening, or bullying of individuals\ \ or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination\ \ or other unlawful or harmful conduct in the provision of employment, employment\ \ benefits, credit, housing, other economic benefits, or other essential goods and\ \ services\n 4. Engage in the unauthorized or unlicensed practice of any profession\ \ including, but not limited to, financial, legal, medical/health, or related professional\ \ practices \n 5. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama 2 Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system \n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 2 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal\ \ drugs and regulated/controlled substances\n 4. Operation of critical infrastructure,\ \ transportation technologies, or heavy machinery\n 5. 
Self-harm or harm to others,\ \ including suicide, cutting, and eating disorders\n 6. Any content intended to\ \ incite or promote violence, abuse, or any infliction of bodily harm to an individual\n\ 3. Intentionally deceive or mislead others, including use of Llama 2 related to\ \ the following:\n 1. Generating, promoting, or furthering fraud or the creation\ \ or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory\ \ content, including the creation of defamatory statements, images, or other content\n\ \ 3. Generating, promoting, or further distributing spam\n 4. Impersonating another\ \ individual without consent, authorization, or legal right\n 5. Representing that\ \ the use of Llama 2 or outputs are human-generated\n 6. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n 4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system \nPlease report any violation of this Policy, software “bug,”\ \ or other problems that could lead to a violation of this Policy through one of\ \ the following means: * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)\ \ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\ \ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\ \ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama:\ \ [[email protected]](mailto:[email protected])" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit language: - code pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 - llama-cpp - gguf-my-repo license: llama2 base_model: meta-llama/CodeLlama-7b-Python-hf --- # alphaduriendur/CodeLlama-7b-Python-hf-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/CodeLlama-7b-Python-hf`](https://huggingface.co/meta-llama/CodeLlama-7b-Python-hf) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/CodeLlama-7b-Python-hf) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo alphaduriendur/CodeLlama-7b-Python-hf-Q4_K_M-GGUF --hf-file codellama-7b-python-hf-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo alphaduriendur/CodeLlama-7b-Python-hf-Q4_K_M-GGUF --hf-file codellama-7b-python-hf-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. 
``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo alphaduriendur/CodeLlama-7b-Python-hf-Q4_K_M-GGUF --hf-file codellama-7b-python-hf-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo alphaduriendur/CodeLlama-7b-Python-hf-Q4_K_M-GGUF --hf-file codellama-7b-python-hf-q4_k_m.gguf -c 2048 ```
sapie-model/SQL-checkpoint-8000
sapie-model
2025-09-25T00:18:53Z
0
0
transformers
[ "transformers", "safetensors", "lora", "peft", "adapters", "ko", "base_model:OpenPipe/gemma-3-27b-it-text-only", "base_model:adapter:OpenPipe/gemma-3-27b-it-text-only", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-25T00:18:31Z
---
library_name: transformers
license: apache-2.0
language: [ko]
tags: [lora, peft, adapters]
task_categories: [text-generation]
base_model: OpenPipe/gemma-3-27b-it-text-only
---

# sapie-model/SQL-checkpoint-8000

- This repository contains **LoRA/adapter weights only**. For inference, load them together with the base model `OpenPipe/gemma-3-27b-it-text-only`.

## Usage example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = "OpenPipe/gemma-3-27b-it-text-only"
model_id = "sapie-model/SQL-checkpoint-8000"  # this adapter repo

tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype='auto', device_map='auto')
model = PeftModel.from_pretrained(model, model_id)
```
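A short generation step could follow the snippet above (it reuses `tok` and `model` from that snippet); the SQL-style prompt and decoding settings are illustrative assumptions.

```python
# Illustrative follow-up: generate with the adapter-augmented model.
prompt = "List all customers who placed an order in 2024.\nSQL:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tok.decode(outputs[0], skip_special_tokens=True))
```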
bn22/CLIP-ViT-B-32-APL
bn22
2025-09-25T00:14:59Z
0
0
transformers
[ "transformers", "safetensors", "clip", "zero-shot-image-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2025-09-24T19:25:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Samyn/Qwen3-0.6B-Gensyn-Swarm-climbing_hunting_raccoon
Samyn
2025-09-25T00:14:11Z
226
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am climbing_hunting_raccoon", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-03T11:05:50Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am climbing_hunting_raccoon --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EnriqueSolarte/SpaceThinker-Qwen2.5VL-3B-00004-VqCaAuuoeWk_0
EnriqueSolarte
2025-09-25T00:12:46Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B", "base_model:finetune:UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B", "endpoints_compatible", "region:us" ]
null
2025-09-10T08:01:11Z
--- base_model: UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B library_name: transformers model_name: SpaceThinker-Qwen2.5VL-3B-00004-VqCaAuuoeWk_0 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for SpaceThinker-Qwen2.5VL-3B-00004-VqCaAuuoeWk_0 This model is a fine-tuned version of [UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B](https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="EnriqueSolarte/SpaceThinker-Qwen2.5VL-3B-00004-VqCaAuuoeWk_0", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.22.1 - Transformers: 4.56.0 - Pytorch: 2.8.0 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
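The "Training procedure" section above notes only that the model was trained with TRL's SFT; the training script itself is not included in the card. As a rough, hedged sketch of that kind of run (the dataset below is a public placeholder and the hyperparameters are assumptions, not the actual setup used for this checkpoint):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder conversational dataset with a "messages" column; the real run
# used its own (undocumented) data, likely multimodal for the VL base model.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B",  # base model named in the card
    train_dataset=dataset,
    args=SFTConfig(output_dir="SpaceThinker-Qwen2.5VL-3B-sft"),
)
trainer.train()
```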
khazarai/Math-RL
khazarai
2025-09-25T00:10:12Z
11
1
peft
[ "peft", "safetensors", "math", "trl", "unsloth", "grpo", "transformers", "text-generation", "conversational", "en", "dataset:HoangHa/pensez-grpo", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:mit", "region:us" ]
text-generation
2025-09-11T15:17:27Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: peft license: mit datasets: - HoangHa/pensez-grpo language: - en pipeline_tag: text-generation tags: - math - trl - unsloth - grpo - transformers --- # Model Card for Math-RL ## Model Details This model is a fine-tuned version of Qwen2.5-0.5B-Instruct, optimized with Group Relative Policy Optimization (GRPO) on a curated math dataset of 700 problems. The fine-tuning process aims to enhance the model’s step-by-step reasoning ability in mathematical problem solving, improving its performance on structured reasoning tasks. ### Model Description - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model:** Qwen2.5-0.5B-Instruct - **Fine-tuning Method**: GRPO with LoRa - **Domain**: Mathematics (problem-solving, reasoning) - **Dataset Size**: ~700 examples ## Uses ### Direct Use The model is intended for: - Educational purposes: assisting students with math problems - Research on small-scale RLHF-style fine-tuning (GRPO) - Experiments in reasoning with small instruction-tuned models - Serving as a lightweight math reasoning assistant in constrained environments ## Bias, Risks, and Limitations - Small Dataset: Fine-tuned only on 700 math problems, so generalization is limited. - Reasoning Errors: May produce incorrect or hallucinated answers. Always verify results. - Not a Math Oracle: Should not be used in high-stakes scenarios (e.g., exams, grading, critical calculations). - Limited Scope: Performance is strongest on problems similar to the fine-tuning dataset; outside domains may degrade. - Language: While the base model supports multiple languages, math-specific fine-tuning was primarily English-based. ## How to Get Started with the Model Use the code below to get started with the model. ```python from huggingface_hub import login from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel login(token="") tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct",) base_model = AutoModelForCausalLM.from_pretrained( "unsloth/Qwen2.5-0.5B-Instruct", device_map={"": 0}, token="" ) model = PeftModel.from_pretrained(base_model,"khazarai/Math-RL") question = """ Translate the graph of the function $y=\sin 2x$ along the $x$-axis to the left by $\dfrac{\pi }{6}$ units, and stretch the ordinate to twice its original length (the abscissa remains unchanged) to obtain the graph of the function $y=f(x)$. If the minimum value of the function $y=f(x)+a$ on the interval $\left[ 0,\dfrac{\pi }{2} \right]$ is $\sqrt{3}$, then $a=\boxed{\_\_\_\_\_}$. """ system = """ Respond in the following format: <reasoning> ... </reasoning> <answer> ... 
</answer> """ messages = [ {"role" : "system", "content" : system}, {"role" : "user", "content" : question} ] text = tokenizer.apply_chat_template( messages, tokenize = False, ) from transformers import TextStreamer _ = model.generate( **tokenizer(text, return_tensors = "pt").to("cuda"), max_new_tokens = 2048, streamer = TextStreamer(tokenizer, skip_prompt = True), ) ``` **For pipeline:** ```python from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct") base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct") model = PeftModel.from_pretrained(base_model, "khazarai/Math-RL") question = """ Translate the graph of the function $y=\sin 2x$ along the $x$-axis to the left by $\dfrac{\pi }{6}$ units, and stretch the ordinate to twice its original length (the abscissa remains unchanged) to obtain the graph of the function $y=f(x)$. If the minimum value of the function $y=f(x)+a$ on the interval $\left[ 0,\dfrac{\pi }{2} \right]$ is $\sqrt{3}$, then $a=\boxed{\_\_\_\_\_}$. """ system = """ Respond in the following format: <reasoning> ... </reasoning> <answer> ... </answer> """ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) messages = [ {"role" : "system", "content" : system}, {"role": "user", "content": question} ] pipe(messages) ``` ### Framework versions - PEFT 0.15.2
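The card above describes GRPO training toward the `<reasoning>`/`<answer>` output format shown in the system prompt. As a hedged illustration (not the actual training code), a format-based reward function of the kind typically passed to TRL's `GRPOTrainer` might look like this:

```python
import re

# Reward completions that follow the <reasoning>...</reasoning><answer>...</answer>
# format described in the model card; everything else gets zero reward.
# Assumes plain-string completions (standard, non-conversational dataset format).
def format_reward(completions, **kwargs):
    pattern = r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>"
    return [1.0 if re.search(pattern, c, re.DOTALL) else 0.0 for c in completions]

# Sketch of how such a reward plugs into TRL (dataset preparation omitted):
#
# from trl import GRPOConfig, GRPOTrainer
# trainer = GRPOTrainer(
#     model="unsloth/Qwen2.5-0.5B-Instruct",
#     reward_funcs=[format_reward],
#     args=GRPOConfig(output_dir="Math-RL-grpo"),
#     train_dataset=dataset,  # e.g. prompts derived from HoangHa/pensez-grpo
# )
```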
juanmvs/neurona
juanmvs
2025-09-25T00:08:42Z
16
0
peft
[ "peft", "safetensors", "llama", "llama-3.3", "lora", "qlora", "conversational", "spanish", "workplace-safety", "violence-prevention", "chat", "instruction-tuning", "es", "dataset:bertin-project/alpaca-spanish", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:adapter:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "region:us" ]
null
2025-09-23T17:34:32Z
--- language: es license: llama3.3 library_name: peft tags: - llama - llama-3.3 - peft - lora - qlora - conversational - spanish - workplace-safety - violence-prevention - chat - instruction-tuning base_model: meta-llama/Llama-3.3-70B-Instruct datasets: - bertin-project/alpaca-spanish model-index: - name: neurona results: [] --- # Neurona - a Spanish workplace violence prevention and sexual harassment support model **Neurona** is a specialized fine-tuned version of Meta's `Llama-3.3-70B-Instruct` model, designed for Spanish-language conversations about workplace violence prevention and sexual harassment support. This PEFT (LoRA) adapter provides empathetic, professional, and informative responses to users seeking guidance and support in workplace safety situations. Fine-tuned using QLoRA on NVIDIA H100 GPU with a curated dataset of workplace violence prevention conversations. The repo with the finetuning scripts can be found [here](https://github.com/juanmvsa/llama3-3-70b-finetuning?tab=readme-ov-file). ## Model Details - **Model Type:** PEFT LoRA Adapter - **Base Model:** `meta-llama/Llama-3.3-70B-Instruct` - **Fine-tuning Method:** QLoRA (4-bit Quantized Low-Rank Adaptation) - **Language:** Spanish (es) - **Domain:** Workplace safety, violence prevention, and sexual harassment support - **License:** Llama 3.3 Community License - **Parameters:** LoRA adapter (~150M trainable parameters) ## Intended Use This model is intended to be used as a conversational AI assistant to provide: - Educational information about workplace violence and harassment. - Guidance on reporting procedures and seeking help. - Empathetic support for individuals in difficult workplace situations. ### Out-of-Scope Use This model is **not** a substitute for professional legal, psychological, or crisis intervention services. It should not be used for: - Providing legal advice. - Medical or psychological diagnosis. - Emergency or crisis situations. ## How to Use ### Requirements ```bash pip install transformers torch peft bitsandbytes accelerate ``` ### Basic Usage This is a PEFT LoRA adapter that must be loaded on top of the base Llama 3.3 70B Instruct model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from peft import PeftModel # Load base model and tokenizer base_model_id = "meta-llama/Llama-3.3-70B-Instruct" adapter_model_id = "juanmvs/neurona" tokenizer = AutoTokenizer.from_pretrained(base_model_id) base_model = AutoModelForCausalLM.from_pretrained( base_model_id, torch_dtype=torch.bfloat16, device_map="auto", load_in_4bit=True # Enable 4-bit quantization for memory efficiency ) # Load PEFT adapter model = PeftModel.from_pretrained(base_model, adapter_model_id) # Specialized system prompt for workplace violence prevention system_prompt = """Eres un asistente especializado en prevención de violencia laboral y acoso sexual en el entorno de trabajo. Tu objetivo es proporcionar apoyo empático, información precisa y recursos específicos a personas que puedan estar experimentando situaciones difíciles en su lugar de trabajo.
IMPORTANTE: Siempre mantén un tono profesional pero cálido, valida las emociones del usuario, y proporciona información práctica basada en protocolos establecidos.""" # Example conversation messages = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": "Creo que estoy sufriendo acoso laboral, ¿qué puedo hacer?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) outputs = model.generate( input_ids, max_new_tokens=512, eos_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Memory Requirements | Configuration | GPU Memory | RAM | Storage | |---------------|------------|-----|---------| | **4-bit quantized** | 8GB+ VRAM | 16GB+ | 20GB+ | | **Full precision** | 40GB+ VRAM | 64GB+ | 150GB+ | ### Hardware Recommendations - **Recommended:** RTX 4090, A100, H100 (with 4-bit quantization) - **Minimum:** RTX 3090, V100 (with 4-bit quantization) - **CPU inference:** Possible but very slow (32GB+ RAM required) ### Inference Script This repository includes a comprehensive inference script (`inference.py`) that supports: - Interactive model selection between base Llama 3.3 70B and Neurona - Side-by-side comparison of model responses - Single inference mode and interactive chat mode - Automatic quantization and memory optimization Usage examples: ```bash # Interactive model selection python inference.py --interactive --token your_hf_token # Direct comparison mode python inference.py --interactive --single --prompt "¿Qué hacer ante acoso laboral?" --token your_hf_token # Neurona model only python inference.py --model meta-llama/Llama-3.3-70B-Instruct --token your_hf_token ``` ## Training Data - **Training Set:** A custom dataset of 48 Spanish instruction-response pairs focused on workplace violence prevention. - **Validation Set:** 1000 samples from the `bertin-project/alpaca-spanish` dataset to ensure general conversational quality. The training data was carefully curated to include empathetic, professional, and relevant responses for the target domain. ## Training Procedure ### Fine-tuning with QLoRA The model was fine-tuned using 4-bit NormalFloat (NF4) quantization and LoRA. - **LoRA `r`:** 128 - **LoRA `alpha`:** 32 - **LoRA `dropout`:** 0.05 - **Target Modules:** `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`, `embed_tokens`, `lm_head` ### Hyperparameters - **Learning Rate:** 1e-4 - **Scheduler:** Cosine - **Epochs:** 3 - **Per-Device Batch Size:** 1 (optimized for H100) - **Gradient Accumulation Steps:** 32 (effective batch size: 32) - **Warmup Steps:** 100 - **Weight Decay:** 0.01 - **Gradient Clipping:** 0.5 ### Hardware and Software - **GPU:** NVIDIA H100 PCIe (79.6GB effective memory) - **Software:** PyTorch 2.4.0, TRL, PEFT, bitsandbytes, accelerate ## Evaluation ### Training Metrics | Metric | Value | |---|---| | **Training Loss** | 1.7418 | | **Mean Token Accuracy** | 63.63% | | **Entropy** | 1.1294 | | **Training Runtime** | 224 seconds (3.73 minutes) | | **Total FLOPs** | 2.33 × 10¹⁶ | | **Total Tokens Processed** | 54,621 | | **Samples per Second** | 0.429 | | **Global Steps** | 3 | ### Conversation Quality A multi-dimensional evaluation framework was used to assess conversation quality, with a composite score of **0.73** (target > 0.65). 
| Metric | Score | |---|---| | **Empathy Score** | 0.67 | | **Domain Relevance** | 0.81 | | **Professional Tone** | 0.74 | ## Limitations & Ethical Considerations ### Model Limitations - **Domain Specificity:** Optimized for Spanish workplace violence prevention; may not perform well on general tasks - **Data Coverage:** Based on 32 training examples; may not cover all workplace situation nuances - **Cultural Context:** Designed for Spanish-speaking workplace environments - **Response Length:** Optimized for conversational responses, not long-form content ### Ethical Guidelines - **Not Professional Services:** This model provides educational information only, not legal or psychological advice - **Crisis Situations:** For immediate danger, contact emergency services (112 in Spain, 911 in US) - **Privacy:** Users should not share sensitive personal information - **Bias Awareness:** Responses may reflect biases present in training data - **Human Oversight:** Recommend human review for critical workplace decisions ### Safety Considerations - **Emergency Situations:** Always prioritize professional emergency services - **Legal Matters:** Consult qualified employment lawyers for legal advice - **Mental Health:** Seek licensed mental health professionals for psychological support - **Workplace Policies:** Follow your organization's specific HR protocols ## Citation If you use this model in your research or applications, please cite it as: ```bibtex @misc{neurona-2025, author = {Juan MVS}, title = {Neurona: Spanish Workplace Violence Prevention Chatbot}, year = {2025}, publisher = {Hugging Face}, journal = {Hugging Face Hub}, howpublished = {\url{https://huggingface.co/juanmvs/neurona}} } ``` ## Acknowledgments - **Base Model:** Meta AI for Llama 3.3 70B Instruct - **Framework:** Hugging Face Transformers and PEFT libraries - **Training Infrastructure:** NVIDIA H100 GPU - **Validation Dataset:** Bertin Project for Spanish Alpaca dataset ## Project Structure This is a complete finetuning project that includes: - **Training Script:** `finetune_llama33_70b.py` - Comprehensive QLoRA training pipeline - **Inference Script:** `inference.py` - Interactive inference and model comparison - **Upload Script:** `upload_to_hf.py` - HuggingFace model upload utility - **Configuration:** `pyproject.toml` - Complete dependency and project configuration - **Training Data:** `ft_data.json` - 48 curated Spanish workplace safety conversations ### Key Dependencies - PyTorch 2.4.0 with CUDA 12.1 support - Transformers ≥4.45.0 for Llama 3.3 compatibility - PEFT ≥0.12.0 for LoRA implementation - TRL ≥0.11.0 for supervised fine-tuning - BitsAndBytes ≥0.43.0 for 4-bit quantization - Weights & Biases for experiment tracking ## Contact For questions about this model or collaboration opportunities: - **email:** [email protected] - **Model Repository:** [juanmvs/neurona](https://huggingface.co/juanmvs/neurona) --- **⚠️ Disclaimer:** This AI model is for educational and informational purposes only. For workplace violence situations requiring immediate intervention, please contact appropriate emergency services, HR departments, or professional counselors.
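As a companion to the QLoRA hyperparameters reported above, the following is a minimal configuration sketch reconstructed from the card's stated values (not the original training script from the linked repository):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base model, as described in the QLoRA section
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter configuration matching the values reported in the card
lora_config = LoraConfig(
    r=128,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "embed_tokens", "lm_head",
    ],
    task_type="CAUSAL_LM",
)
```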
niedamsie/gotosolopicmakerlustynewsmalldsetprodigyoverfit
niedamsie
2025-09-25T00:07:57Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-23T21:43:28Z
--- license: apache-2.0 ---
Anuj-HF/Qwen2.5-VL-3B-Instruct-Thinking
Anuj-HF
2025-09-25T00:07:49Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "grpo", "trl", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-24T23:49:51Z
--- base_model: Qwen/Qwen2.5-VL-3B-Instruct library_name: transformers model_name: Qwen2.5-VL-3B-Instruct-Thinking tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for Qwen2.5-VL-3B-Instruct-Thinking This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Anuj-HF/Qwen2.5-VL-3B-Instruct-Thinking", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.24.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite GRPO as: ```bibtex @article{shao2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
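For readers unfamiliar with GRPO, the key idea from the cited DeepSeekMath paper is to drop the learned value function and use a group-relative baseline instead: for each prompt, a group of G completions is sampled, and each completion's advantage is its reward normalized within that group,

$$
\hat{A}_i = \frac{r_i - \operatorname{mean}(\{r_1, \dots, r_G\})}{\operatorname{std}(\{r_1, \dots, r_G\})},
$$

which is then plugged into a PPO-style clipped policy-gradient objective.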
khazarai/Personal-Finance-R1
khazarai
2025-09-25T00:07:46Z
79
5
peft
[ "peft", "safetensors", "finance", "transformers", "unsloth", "trl", "text-generation", "conversational", "en", "dataset:Akhil-Theerthala/PersonalFinance_v2", "base_model:unsloth/Qwen3-1.7B", "base_model:adapter:unsloth/Qwen3-1.7B", "license:mit", "region:us" ]
text-generation
2025-09-07T00:43:39Z
--- base_model: unsloth/Qwen3-1.7B library_name: peft license: mit datasets: - Akhil-Theerthala/PersonalFinance_v2 language: - en pipeline_tag: text-generation tags: - finance - transformers - unsloth - trl --- ## Model Details This model is fine-tuned for instruction-following in the domain of personal finance, with a focus on: - Budgeting advice - Investment strategies - Credit management - Retirement planning - Insurance and financial planning concepts - Personalized financial reasoning ### Model Description - **License:** MIT - **Finetuned from model:** unsloth/Qwen3-1.7B - **Dataset:** The model was fine-tuned on the PersonalFinance_v2 dataset, curated and published by Akhil-Theerthala. ### Model Capabilities - Understands and provides contextual financial advice based on user queries. - Responds in a chat-like conversational format. - Trained to follow multi-turn instructions and deliver clear, structured, and accurate financial reasoning. - Generalizes well to novel personal finance questions and explanations. ## Uses ### Direct Use - Chatbots for personal finance - Educational assistants for financial literacy - Decision support for simple financial planning - Interactive personal finance Q&A systems ## Bias, Risks, and Limitations - Not a substitute for licensed financial advisors. - The model's advice is based on training data and may not reflect region-specific laws, regulations, or financial products. - May occasionally hallucinate or give generic responses in ambiguous scenarios. - Assumes user input is well-formed and relevant to personal finance. ## How to Get Started with the Model Use the code below to get started with the model. ```python from huggingface_hub import login from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel login(token="") tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B",) base_model = AutoModelForCausalLM.from_pretrained( "unsloth/Qwen3-1.7B", device_map={"": 0}, token="" ) model = PeftModel.from_pretrained(base_model,"khazarai/Personal-Finance-R1") question = """ $19k for a coding bootcamp Hi! I was just accepted into the full-time software engineering program with Flatiron and have approx. $0 to my name. I know I can get a loan with either Climb or accent with around 6.50% interest, is this a good option? I would theoretically be paying near $600/month. I really enjoy coding and would love to start a career in tech but the potential $19k price tag is pretty scary. Any advice? """ messages = [ {"role" : "user", "content" : question} ] text = tokenizer.apply_chat_template( messages, tokenize = False, add_generation_prompt = True, enable_thinking = True, ) from transformers import TextStreamer _ = model.generate( **tokenizer(text, return_tensors = "pt").to("cuda"), max_new_tokens = 2048, temperature = 0.6, top_p = 0.95, top_k = 20, streamer = TextStreamer(tokenizer, skip_prompt = True), ) ``` **For pipeline:** ```python from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B") base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-1.7B") model = PeftModel.from_pretrained(base_model, "khazarai/Personal-Finance-R1") question = """ $19k for a coding bootcamp Hi! I was just accepted into the full-time software engineering program with Flatiron and have approx. $0 to my name. I know I can get a loan with either Climb or accent with around 6.50% interest, is this a good option? 
I would theoretically be paying near $600/month. I really enjoy coding and would love to start a career in tech but the potential $19k price tag is pretty scary. Any advice? """ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) messages = [ {"role": "user", "content": question} ] pipe(messages) ``` ## Training Details ### Training Data - Dataset Overview: PersonalFinance_v2 is a collection of high-quality instruction-response pairs focused on personal finance topics. It covers a wide range of subjects including budgeting, saving, investing, credit management, retirement planning, insurance, and financial literacy. - Data Format: The dataset consists of conversational-style prompts paired with detailed and well-structured responses. It is formatted to enable instruction-following language models to understand and generate coherent financial advice and reasoning. ### Framework versions - PEFT 0.14.0
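Because this repository ships only a LoRA adapter, inference requires loading the base model as shown above. For standalone deployment, the adapter can optionally be folded into the base weights; a minimal sketch assuming the standard PEFT API (the output path is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-1.7B")
model = PeftModel.from_pretrained(base_model, "khazarai/Personal-Finance-R1")

# Merge the LoRA weights into the base model and save a standalone checkpoint
merged = model.merge_and_unload()
merged.save_pretrained("Personal-Finance-R1-merged")

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
tokenizer.save_pretrained("Personal-Finance-R1-merged")
```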
corzamennav/blockassist-bc-territorial_wild_antelope_1758758590
corzamennav
2025-09-25T00:04:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T00:04:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
JesusAura999/TinySenikV2
JesusAura999
2025-09-25T00:04:24Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3_text", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-24T23:54:32Z
--- base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** JesusAura999 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
andrewdalpino/NoPE-GPT-400M-Base
andrewdalpino
2025-09-25T00:03:37Z
19
0
null
[ "tensorboard", "safetensors", "NoPE", "GPT", "text-generation", "dataset:HuggingFaceFW/fineweb", "license:apache-2.0", "region:us" ]
text-generation
2025-08-23T01:50:41Z
--- tags: - NoPE - GPT license: apache-2.0 datasets: - HuggingFaceFW/fineweb metrics: - perplexity pipeline_tag: text-generation --- # NoPE GPT NoPE GPT is a generative pretrained Transformer-style (GPT) language model with no positional embeddings (NoPE). Built using [PyTorch](https://pytorch.org/) and trained on HuggingFace's [Fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), [SmolTalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk), and [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) datasets, NoPE GPT can answer questions, summarize documents, use tools, and more. ## Features - **No positional embeddings (NoPE)**: NoPE GPT aims to be a more parsimonious model by completely removing positional embeddings from the architecture allowing the context length to vary without complex model surgery. Despite having no positional embeddings, NoPE GPT performs better at context length generalization than the best relative embeddings (ALiBi, RoPE, T5) offering good performance even when operating within 2X the trained context window. - **Fast and memory-efficient**: NoPE GPT employs a number of training and inference-time optimizations such as Group Query Attention, KV-caching, quantization-aware fine-tuning (QAT), activation checkpointing, and fully-sharded data parallel (FSDP) pretraining. As such, you can train and infer on relatively modest hardware. - **Fully Open-source**: Unlike closed-source LLMs, NoPE GPT provides both the model weights *and* the source code to train, fine-tune, export, and generate text from the model using your own hardware. ## Pretrained Models | Name | Context Length | Vocab. Size | Embedding Dim. | Query Heads | Key/Value Heads | Layers | |---|---|---|---|---|---|---| | [NoPE-GPT-400M-Chat](https://huggingface.co/andrewdalpino/NoPE-GPT-400M-Chat) | 8192 | 50,261 | 1280 | 20 | 5 | 20 | | [NoPE-GPT-400M-Base](https://huggingface.co/andrewdalpino/NoPE-GPT-400M-Base) | 8192 | 50,257 | 1280 | 20 | 5 | 20 | ## Installation The code required to run inference comes as a Python package that you can install with your favorite package manager such as [pip](https://pypi.org/project/pip/). ```sh pip install nope-gpt ``` ## Pretrained Examples This first example we'll show how to load a pretrained base model from HuggingFace Hub and then use it to generate text. First, make sure the `nope-gpt` package is installed into your project. Once the package is installed you can load pretrained weights from HuggingFace Hub like in the example below. ```python from nope_gpt.model import NoPEGPT from nope_gpt.tokenization import BaseTokenizer model_name = "andrewdalpino/NoPE-GPT-400M-Base" model = NoPEGPT.from_pretrained(model_name) tokenizer = BaseTokenizer.from_pretrained(model_name) ``` Then, to generate text, provide a prompt, tokenize it, and iterate through the `generate()` method until the model outputs a stop token. ```python import torch prompt = input("Enter a prompt: ") prompt = tokenizer.tokenize(prompt) prompt = torch.tensor(prompt, dtype=torch.int64) for token, probability in model.generate(prompt): if token.item() in tokenizer.stop_tokens: break out = tokenizer.decode_single_token(token) print(out, end="", flush=True) ``` Generating text from the base model is the simplest way to get started with model inference, however, it is not the most useful when it comes to being able to chat with and guide the model output. 
In this example we'll load one of the pretrained chat models from HuggingFace Hub and then chat with it. In addition, we'll make use of short-term memory so the model can remember the chat history. First, load a pretrained chat model from HuggingFace Hub like in the example below. ```python from nope_gpt.model import NoPEGPT from nope_gpt.tokenization import ChatTokenizer model_name = "andrewdalpino/NoPE-GPT-400M-Chat" model = NoPEGPT.from_pretrained(model_name) tokenizer = ChatTokenizer.from_pretrained(model_name) ``` Then, we'll define a partial function that will generate tokens with a set of default parameters such as `max_tokens`, `context_length`, and `temperature`. ```python from functools import partial generate = partial( model.generate, max_tokens=2000, context_length=8192, temperature=0.7, top_k=500, top_p=0.9, repeat_penalty=0.1, repeat_window=50, ) ``` Next, we'll instantiate a `BufferWindowMemory` object to handle the chat history and craft a system message that will guide generation. Note that messages are inputted as dicts with `role` and `content` keys. For a system message use the `system` role. ```python from nope_gpt.memory import BufferWindowMemory memory = BufferWindowMemory(4) system_message = { "role": "system", "content": "You are a friendly AI assistant.", } ``` Finally, the chat loop below prompts the user for input, adds the system message and chat history to the context, tokenizes the messages, and then generates the `assistant` response. ```python import torch while True: prompt = input("Enter a prompt: ") user_message = { "role": "user", "content": prompt, } memory.add_message(user_message) messages = [system_message] + memory.get_history() tokens = tokenizer.tokenize_prompt(messages) prompt = torch.tensor(tokens, dtype=torch.int64) response = "" for token, probability in generate(prompt): if token.item() in tokenizer.stop_tokens: break out = tokenizer.decode_single_token(token) print(out, end="", flush=True) response += out print("\n") assistant_message = { "role": "assistant", "content": response, } memory.add_message(assistant_message) ``` You're done! For more advanced usage, take a look at the `generate.py` and `chat.py` scripts located in the code repository. ## Training and Fine-tuning In addition to the inference code, we also provide training and fine-tuning code so you can build your own NoPE GPT models. Before getting started, take a look at the `model_sizing.ipynb` IPython notebook in the project repo for a guide to sizing your model based on the amount of memory and compute you have available. ### Clone the project repo We'll need the code from the project repository to train and/or fine-tune the model. ``` git clone https://github.com/andrewdalpino/NoPE-GPT ``` ### Install Project Dependencies Project dependencies are specified in the `requirements.txt` file. You can install them with [pip](https://pip.pypa.io/en/stable/) using the following command from the project root. We recommend using a virtual environment such as `venv` to keep package dependencies on your system tidy. ``` python -m venv ./.venv source ./.venv/bin/activate pip install -r requirements.txt ``` ### Pretraining Pretraining focuses on building a foundation of language and general knowledge to use as a base for future supervised fine-tuning. The training objective is to predict the next token in a sample of text. It is a self-supervised form of training because the model learns from masked inputs of unsupervised data.
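Concretely, next-token prediction just shifts the targets one position and minimizes cross-entropy between the model's logits and the shifted tokens; a minimal PyTorch sketch of the objective (illustrative, not the project's actual training loop):

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Causal language-modeling loss.

    logits: (batch, seq_len, vocab_size) model outputs
    tokens: (batch, seq_len) input token ids
    """
    # Position t predicts token t + 1: drop the final logit and the first token.
    shift_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))
    shift_targets = tokens[:, 1:].reshape(-1)
    return F.cross_entropy(shift_logits, shift_targets)
```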
For the pretraining corpus we use the Fineweb dataset which consists of 15T high-quality tokens gathered from the worldwide web. In addition, the dataset has been split into 3 subsets (10BT, 100BT, and 350BT versions) for training smaller models. ``` python pretrain.py ``` **Note** that it will take a while to download and pre-process the dataset the first time that the training script is run. To customize the default architecture you can adjust the `embedding_dimensions`, attention heads, `num_hidden_layers`, and `feed_forward_ratio` arguments of the pretraining script. ``` python pretrain.py --embedding_dimensions=4096 --num_q_heads=64 --num_kv_heads=16 --num_hidden_layers=48 --feed_forward_ratio=4 ``` You can also adjust the `batch_size`, `learning_rate`, and `gradient_accumulation_steps` to suite your training setup. ``` python pretrain.py --batch_size=32 --learning_rate=0.01 --gradient_accumulation_steps=128 ``` If you are planning a long training run, it is recommended to set a random seed. This will ensure that any random state is preserved if the process gets interrupted. ``` python pretrain.py --seed=42 ``` For distributed training, use PyTorch's [torchrun](https://pytorch.org/docs/stable/elastic/run.html) extension to launch a distributed data parallel (DDP) session. The example below is for executing the training script on a single node with 8 individual GPUs. ``` torchrun --standalone --nnodes=1 --nproc-per-node=8 pretrain.py --batch_size=16 --gradient_accumulation_steps=128 ``` **Note** that when training in data-parallel mode it's important that the `gradient_accumulation_steps` divides evenly into the world size for maximum performance. For example, if we have an 8 GPU cluster, we could perform 32 gradient accumulation steps in exactly 4 passes over the network. ### Pretraining Arguments | Argument | Default | Type | Description | |---|---|---|---| | --dataset_subset | "sample-10BT" | str | The subset of the Fineweb dataset to train on. Options are `sample-10BT`, `sample-100BT`, and `sample-350BT`. Set to `None` to train on the full 15T token dataset. | | --token_encoding | "r50k_base" | str | The Tiktoken encoding scheme to use when tokenizing the dataset. Options include `r50k_base`, `p50k_base`, `cl100k_base`, and `o200k_base`. | | --dataset_path | "./datasets" | str | The path to the preprocessed dataset files on disk. | | --batch_size | 2 | int | The number of samples of size `tokens_per_sample` to pass through the network at a time. | | --gradient_accumulation_steps | 128 | int | The number of batches to pass through the network before updating the model weights. | | --tokens_per_sample | 4096 | int | The number of tokens to pack into a single training sequence. This is sometimes called the block size or context length. | | --max_steps | 10000 | int | The maximum number of steps to take for pretraining. | | --learning_rate | 1e-2 | float | The learning rate of the Adafactor optimizer. | | --low_memory_optimizer | False | bool | Should the optimizer reduce its memory consumption in exchange for a slightly slower runtime? | | --max_gradient_norm | 10.0 | float | Clip gradients above this threshold norm before stepping. | | --embedding_dimensions | 1024 | int | The dimensionality of the token embeddings. | | --num_q_heads | 16 | int | The number of query heads within every attention layer. | | --num_kv_heads | 4 | int | The number of key and value heads within every attention layer. 
| | --num_hidden_layers | 16 | int | The number of attention/MLP blocks within the body of the network. | | --feed_forward_ratio | 4 | (1, 2, 4) | The ratio of hidden neurons to embedding dimensions in the MLP layers of the network. | | --dropout | 0.0 | float | The proportion of signals to send to zero during training as regularization. | | --activation_checkpointing | False | bool | Should we use activation checkpointing? This will drastically reduce memory utilization during training at the cost of recomputing the forward pass. | | --ddp_sharding_level | 2 | int | The level of sharding to use for DDP training. Options are 2 or 3 for partial and full sharding respectively, or 0 for no sharding. | | --eval_interval | 100 | int | Evaluate the model after this many epochs on the testing set. | | --num_eval_samples | 2048 | int | The number of hold-out samples to use for validation during training. | | --checkpoint_interval | 100 | int | Save the model checkpoint to disk every this many epochs. | | --checkpoint_path | "./checkpoints/checkpoint.pt" | str | The path to the base checkpoint file on disk. | | --resume | False | bool | Should we resume training from the last checkpoint? | | --run_dir_path | "./runs" | str | The path to the TensorBoard run directory for this training session. | | --device | "cpu" | str | The device to run the training on ex `cuda`, `cuda:0`, `mps`, `cpu`. | | --seed | None | int | The seed for the random number generator. | ### Fine-tuning Instruction-tuning is a supervised training technique focused on developing specialized objectives such as chatting, text summarization, chain-of-thought, and prompt rewriting. We use the SmolTalk and UltraFeedback datasets by HuggingFace as fine-tuning corpora because they include a broad range of training objectives such as conversation, instruction following, summarization, and human preference alignment. ``` python fine-tune.py ``` To pick which dataset subsets to train on you can specify them in a comma-separated list like in the example below. ``` python fine-tune.py --dataset_subsets=smol-magpie-ultra,smol-summarize,ultra-feedback ``` You can also adjust the `batch_size`, `learning_rate`, and `gradient_accumulation_steps` just like we did with pre-training. ``` python fine-tune.py --batch_size=32 --learning_rate=0.01 --gradient_accumulation_steps=32 ``` To adjust the number of trainable LoRA parameters as well as the strength of the LoRA and Dropout signals you can change the `--rank` and `--alpha` arguments respectively. ``` python fine-tune.py --rank=4 --alpha=2.0 ``` To quantize the base model weights during fine-tuning (QLoRA) you can specify the `quantize_base_weights` argument and to adjust the quant group size set the `quant_group_size` argument like in the example below. ``` python fine-tune.py --quantize_base_weights --quant_group_size=128 ``` In memory constrained environments, you can enable activation checkpointing to trade off compute for memory efficiency by recomputing the activations of each decoder block during backpropagation. ``` python fine-tune.py --activation_checkpointing ``` ### Fine-tuning Arguments | Argument | Default | Type | Description | |---|---|---|---| | --base_checkpoint_path | None | string | The path to the base model checkpoint on disk. | | --dataset_subset | "all" | str | A comma-separated list of subsets of the dataset to train on. 
Options are `all`, `apigen-80k`, `everyday-conversations`, `explore-instruct-rewriting`, `longalign`, `metamathqa-50k`, `numina-cot-100k`, `openhermes-100k`, `self-oss-instruct`, `smol-constraints`, `smol-magpie-ultra`, `smol-rewrite`, `smol-summarize`, `systemchats-30k`, and `ultra-feedback`. | | --max_tokens_per_sample | 4096 | int | The maximum number of tokens to pack into a single training sequence. | | --filter_long_samples | False | bool | Should we filter out samples that are longer than the max_tokens_per_sample? | | --num_dataset_processes | 8 | int | The number of processes to use for processing the dataset. | | --batch_size | 1 | int | The number of samples to pass through the network at a time. | | --gradient_accumulation_steps | 128 | int | The number of batches to pass through the network before updating the weights. | | --num_epochs | 2 | int | The number of epochs to train for. | | --learning_rate | 1e-2 | float | The learning rate of the Adafactor optimizer. | | --low_memory_optimizer | False | bool | Should the optimizer reduce its memory consumption in exchange for a slightly slower runtime? | | --max_gradient_norm | 1.0 | float | Clip gradients above this threshold norm before stepping. | | --rank | 8 | int | The rank of the LoRA decomposition matrices. | | --alpha | 1.0 | float | The strength of the LoRA signal. | | --freeze_token_embeddings | False | bool | Should we freeze the weights of the token embeddings? | | --activation_checkpointing | False | bool | Should we use activation checkpointing? This will reduce drastically memory utilization during training at the cost of needing to recompute the forward pass. | | --eval_interval | 1 | int | Evaluate the model after this many epochs on the testing set. | | --num_eval_samples | 2048 | int | The number of hold-out samples to use for validation during training. | | --checkpoint_interval | 1 | int | Save the model parameters to disk every this many epochs. | | --checkpoint_path | "./checkpoints/checkpoint.pt" | str | The path to the model checkpoint. | | --resume | False | bool | Should we resume training from the last checkpoint? | | --run_dir_path | "./runs" | str | The path to the TensorBoard run directory for this training session. | | --device | "cpu" | str | The device to run the training on ex `cuda`, `cuda:0`, `mps`, `cpu`. | | --seed | None | int | The seed for the random number generator. | ### Training Dashboard We use [TensorBoard](https://www.tensorflow.org/tensorboard) to capture and display pretraining events such as loss and gradient norm updates. To launch the dashboard server run the following command from the terminal. ``` tensorboard --logdir=./runs ``` Then navigate to the dashboard using your favorite web browser. ## References: >- G. Penedo, et al. The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale, 38th Conference on Neural Information Processing Systems (NeurIPS 2024) Track on Datasets and Benchmarks. >- L. B. Allal, et al. SmolLM2 - with great data, comes great performance, 2024. >- A. Radford, et al. Language Models are Unsupervised Multitask Learners, OpenAI, 2019. >- T. Brown, et al. Language Models are Few-Shot Learners. OpenAI, 2020. >- A. Kazemnejad, et al. The Impact of Positional Encoding on Length Generalization in Transformers, 37th Conference on Neural Information Processing Systems (NeurIPS 2023). >- S. Rajbhandari, et al. ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, 2020. >- J. R. Hermans, et al. 
Accumulated Gradient Normalization, JMLR: Workshop and Conference Proceedings, 2017. >- T. Chen, et al. Training Deep Nets with Sublinear Memory Cost. MIT, 2019. >- B. Zhang, et al. Root Mean Square Layer Normalization. 33rd Conference on Neural Information Processing Systems, NeurIPS 2019. >- J. Kaplan, et al. Scaling Laws for Neural Language Models, OpenAI, 2020. >- J. Hoffman, et al. Training Compute-Optimal Large Language Models, Deep Mind, 2022. >- J. Ainslie, et al. GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints, Google Research, 2023.
khazarai/Fino1-4B
khazarai
2025-09-25T00:02:57Z
19
1
peft
[ "peft", "safetensors", "finance", "trl", "unsloth", "sft", "transformers", "text-generation", "conversational", "en", "dataset:TheFinAI/Fino1_Reasoning_Path_FinQA", "arxiv:2109.00122", "arxiv:2502.08127", "base_model:unsloth/Qwen3-4B", "base_model:adapter:unsloth/Qwen3-4B", "license:mit", "region:us" ]
text-generation
2025-09-09T00:38:29Z
--- base_model: unsloth/Qwen3-4B library_name: peft license: mit datasets: - TheFinAI/Fino1_Reasoning_Path_FinQA language: - en pipeline_tag: text-generation tags: - finance - trl - unsloth - sft - transformers --- # Model Card for Model ID ## Model Details Fino1-4B is a fine-tuned version of Qwen3-4B adapted for financial reasoning and question answering. The model was trained with LoRA parameter-efficient fine-tuning on a curated dataset derived from FinQA and enriched with GPT-4o-generated reasoning paths, enabling more structured and explainable answers to financial questions. ### Model Description - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model:** Qwen3-4B - **Domain:** Finance ## Uses ### Direct Use - Primary use case: Answering financial reasoning questions with step-by-step structured reasoning. - Example applications: - Financial report analysis - Numerical reasoning over tabular data - Structured financial Q&A for educational/research purposes ## Bias, Risks, and Limitations - Not a financial advisor; outputs should not be considered professional financial advice. - May hallucinate numbers or misinterpret complex financial contexts. - Trained on ~5.5K examples; may lack coverage for niche financial instruments or international standards. ## How to Get Started with the Model ```python from huggingface_hub import login from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel login(token="") tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-4B",) base_model = AutoModelForCausalLM.from_pretrained( "unsloth/Qwen3-4B", device_map={"": 0}, token="" ) model = PeftModel.from_pretrained(base_model,"khazarai/Fino1-4B") question = """ Please answer the given financial question based on the context. Context: note 9 2014 benefit plans the company has defined benefit pension plans covering certain employees in the united states and certain international locations . postretirement healthcare and life insurance benefits provided to qualifying domestic retirees as well as other postretirement benefit plans in international countries are not material . the measurement date used for the company 2019s employee benefit plans is september 30 . effective january 1 , 2018 , the legacy u.s . pension plan was frozen to limit the participation of employees who are hired or re-hired by the company , or who transfer employment to the company , on or after january 1 , net pension cost for the years ended september 30 included the following components: . |( millions of dollars )|pension plans 2019|pension plans 2018|pension plans 2017| |service cost|$ 134|$ 136|$ 110| |interest cost|107|90|61| |expected return on plan assets|( 180 )|( 154 )|( 112 )| |amortization of prior service credit|( 13 )|( 13 )|( 14 )| |amortization of loss|78|78|92| |settlements|10|2|2014| |net pension cost|$ 135|$ 137|$ 138| |net pension cost included in the preceding table that is attributable to international plans|$ 32|$ 34|$ 43| net pension cost included in the preceding table that is attributable to international plans $ 32 $ 34 $ 43 the amounts provided above for amortization of prior service credit and amortization of loss represent the reclassifications of prior service credits and net actuarial losses that were recognized in accumulated other comprehensive income ( loss ) in prior periods . the settlement losses recorded in 2019 and 2018 primarily included lump sum benefit payments associated with the company 2019s u.s . supplemental pension plan . 
the company recognizes pension settlements when payments from the supplemental plan exceed the sum of service and interest cost components of net periodic pension cost associated with this plan for the fiscal year . as further discussed in note 2 , upon adopting an accounting standard update on october 1 , 2018 , all components of the company 2019s net periodic pension and postretirement benefit costs , aside from service cost , are recorded to other income ( expense ) , net on its consolidated statements of income , for all periods presented . notes to consolidated financial statements 2014 ( continued ) becton , dickinson and company . Question: what is the percentage increase in service costs from 2017 to 2018? Answer: """ messages = [ {"role" : "user", "content" : question} ] text = tokenizer.apply_chat_template( messages, tokenize = False, add_generation_prompt = True, enable_thinking = True, ) from transformers import TextStreamer _ = model.generate( **tokenizer(text, return_tensors = "pt").to("cuda"), max_new_tokens = 4000, temperature = 0.6, top_p = 0.95, top_k = 20, streamer = TextStreamer(tokenizer, skip_prompt = True), ) ``` **For pipeline:** ```python from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-4B") base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-4B") model = PeftModel.from_pretrained(base_model, "khazarai/Fino1-4B") question = """ Please answer the given financial question based on the context. Context: note 9 2014 benefit plans the company has defined benefit pension plans covering certain employees in the united states and certain international locations . postretirement healthcare and life insurance benefits provided to qualifying domestic retirees as well as other postretirement benefit plans in international countries are not material . the measurement date used for the company 2019s employee benefit plans is september 30 . effective january 1 , 2018 , the legacy u.s . pension plan was frozen to limit the participation of employees who are hired or re-hired by the company , or who transfer employment to the company , on or after january 1 , net pension cost for the years ended september 30 included the following components: . |( millions of dollars )|pension plans 2019|pension plans 2018|pension plans 2017| |service cost|$ 134|$ 136|$ 110| |interest cost|107|90|61| |expected return on plan assets|( 180 )|( 154 )|( 112 )| |amortization of prior service credit|( 13 )|( 13 )|( 14 )| |amortization of loss|78|78|92| |settlements|10|2|2014| |net pension cost|$ 135|$ 137|$ 138| |net pension cost included in the preceding table that is attributable to international plans|$ 32|$ 34|$ 43| net pension cost included in the preceding table that is attributable to international plans $ 32 $ 34 $ 43 the amounts provided above for amortization of prior service credit and amortization of loss represent the reclassifications of prior service credits and net actuarial losses that were recognized in accumulated other comprehensive income ( loss ) in prior periods . the settlement losses recorded in 2019 and 2018 primarily included lump sum benefit payments associated with the company 2019s u.s . supplemental pension plan . the company recognizes pension settlements when payments from the supplemental plan exceed the sum of service and interest cost components of net periodic pension cost associated with this plan for the fiscal year . 
as further discussed in note 2 , upon adopting an accounting standard update on october 1 , 2018 , all components of the company 2019s net periodic pension and postretirement benefit costs , aside from service cost , are recorded to other income ( expense ) , net on its consolidated statements of income , for all periods presented . notes to consolidated financial statements 2014 ( continued ) becton , dickinson and company . Question: what is the percentage increase in service costs from 2017 to 2018? Answer: """ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) messages = [ {"role": "user", "content": question} ] pipe(messages) ``` ## Training Data The model was fine-tuned on the [TheFinAI/Fino1_Reasoning_Path_FinQA](https://huggingface.co/datasets/TheFinAI/Fino1_Reasoning_Path_FinQA) dataset, a FinQA-derived corpus of financial questions paired with GPT-4o-generated reasoning paths (see the citations below). ### Citation ```bibtex @article{chen2021finqa, title={Finqa: A dataset of numerical reasoning over financial data}, author={Chen, Zhiyu and Chen, Wenhu and Smiley, Charese and Shah, Sameena and Borova, Iana and Langdon, Dylan and Moussa, Reema and Beane, Matt and Huang, Ting-Hao and Routledge, Bryan and others}, journal={arXiv preprint arXiv:2109.00122}, year={2021} } @article{qian2025fino1, title={Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance}, author={Qian, Lingfei and Zhou, Weipeng and Wang, Yan and Peng, Xueqing and Huang, Jimin and Xie, Qianqian}, journal={arXiv preprint arXiv:2502.08127}, year={2025} } ``` ### Framework versions - PEFT 0.14.0
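For reference, the example question above can be answered directly from the table in its context: service cost rose from $110M in 2017 to $136M in 2018, so the expected percentage increase is

$$
\frac{136 - 110}{110} \approx 23.6\%.
$$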
model-hub-nd/Wan2.2-I2V-A14B-Light21-Det-Diffusers
model-hub-nd
2025-09-25T00:02:32Z
20
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "diffusers:WanImageToVideoPipeline", "region:us" ]
null
2025-09-22T21:10:49Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pranavnaik98/tiago-open-door_act_policy_gazebo_50
pranavnaik98
2025-09-25T00:02:16Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:pranavnaik98/tiago-gazebo-act-50", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-09-25T00:01:35Z
--- datasets: pranavnaik98/tiago-gazebo-act-50 library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - lerobot - act - robotics --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
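## Load the policy in Python (sketch)

A rough sketch of loading this checkpoint directly from Python, assuming a recent LeRobot release. The import path, observation keys, and tensor shapes below are illustrative assumptions and must be adapted to your installed LeRobot version and to the features of the `pranavnaik98/tiago-gazebo-act-50` dataset.

```python
# Sketch only: query one action from the trained ACT policy.
# NOTE: the import path differs across LeRobot versions (e.g. lerobot.policies.act.modeling_act
# in newer releases); the observation keys and shapes below are hypothetical placeholders.
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("pranavnaik98/tiago-open-door_act_policy_gazebo_50")
policy.eval()

batch = {
    "observation.state": torch.zeros(1, 7),                          # hypothetical state dimension
    "observation.images.head_camera": torch.zeros(1, 3, 480, 640),   # hypothetical camera key
}
with torch.no_grad():
    action = policy.select_action(batch)  # next action from the predicted chunk
print(action.shape)
```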
alphaduriendur/Llama-3.2-1B-Instruct-Q4_K_M-GGUF
alphaduriendur
2025-09-25T00:01:19Z
0
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-25T00:01:11Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo license: llama3.2 extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \n“Licensee” or “you” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display “Built with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include “Llama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a “Notice” text file distributed as a part of such copies: “Llama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software “bug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit base_model: meta-llama/Llama-3.2-1B-Instruct --- # alphaduriendur/Llama-3.2-1B-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo alphaduriendur/Llama-3.2-1B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-1b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo alphaduriendur/Llama-3.2-1B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-1b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo alphaduriendur/Llama-3.2-1B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-1b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo alphaduriendur/Llama-3.2-1B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-1b-instruct-q4_k_m.gguf -c 2048 ```
yujunzhou/SFT_Advanced_Risk_Situation_Aware_llama
yujunzhou
2025-09-24T23:59:57Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T19:04:43Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Llama-3.1-8B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: SFT_Advanced_Risk_Situation_Aware_llama results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SFT_Advanced_Risk_Situation_Aware_llama This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the Advanced_Risk_Situation_Aware_llama dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 2.19.2 - Tokenizers 0.21.1
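## How to Get Started with the Model

The card above does not include usage code; the following is a minimal inference sketch using the standard 🤗 Transformers chat-template API (the example prompt is illustrative only):

```python
# Minimal inference sketch for this fine-tuned Llama-3.1-8B-Instruct checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yujunzhou/SFT_Advanced_Risk_Situation_Aware_llama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    # illustrative prompt, not from the training data
    {"role": "user", "content": "Describe the main safety risks in this situation: a wet floor next to a busy stairwell."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```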
khazarai/MentalChat-16K
khazarai
2025-09-24T23:58:41Z
15
1
peft
[ "peft", "safetensors", "mental", "unsloth", "sft", "trl", "transformers", "text-generation", "conversational", "en", "dataset:ShenLab/MentalChat16K", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct", "license:mit", "region:us" ]
text-generation
2025-09-12T20:20:53Z
--- base_model: unsloth/Llama-3.2-1B-Instruct library_name: peft license: mit datasets: - ShenLab/MentalChat16K language: - en tags: - mental - unsloth - sft - trl - transformers pipeline_tag: text-generation --- # Model Card for MentalChat-16K ## Model Details This model is a fine-tuned version of Llama-3.2-1B-Instruct, optimized for empathetic and supportive conversations in the mental health domain. It was trained on the ShenLab/MentalChat16K dataset, which includes over 16,000 counseling-style Q&A examples, combining real clinical paraphrases and synthetic mental health dialogues. The model is designed to understand and respond to emotionally nuanced prompts related to stress, anxiety, relationships, and personal well-being. ### Model Description - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct - **Dataset:** ShenLab/MentalChat16K ## Uses This model is intended for research and experimentation in AI-driven mental health support. Key use cases include: - Mental health chatbot prototypes - Empathy-focused dialogue agents - Benchmarking LLMs on emotional intelligence and counseling-style prompts - Educational or training tools in psychology or mental health communication This model is NOT intended for clinical diagnosis, therapy, or real-time intervention. It must not replace licensed mental health professionals. ## Bias, Risks, and Limitations - Biases: - The real interview data is biased toward caregivers (mostly White, female, U.S.-based), which may affect the model’s cultural and demographic generalizability. - The synthetic dialogues are generated by GPT-3.5, which may introduce linguistic and cultural biases from its pretraining. - Limitations: - The base model, Llama-3.2-1B-Instruct, is a small model (1B parameters), limiting depth of reasoning and nuanced understanding. - Not suitable for handling acute mental health crises or emergency counseling. - Responses may lack therapeutic rigor or miss subtle psychological cues. - May produce hallucinated or inaccurate mental health advice. ## How to Get Started with the Model Use the code below to get started with the model. ```python from huggingface_hub import login from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct",) base_model = AutoModelForCausalLM.from_pretrained( "unsloth/Llama-3.2-1B-Instruct", device_map={"": 0} ) model = PeftModel.from_pretrained(base_model,"khazarai/MentalChat-16K") system = """You are a helpful mental health counselling assistant, please answer the mental health questions based on the patient's description. The assistant gives helpful, comprehensive, and appropriate answers to the user's questions. """ question = """ I've been feeling overwhelmed by my responsibilities at work and caring for my aging parents. I've reached a point where I don't know what else I can do, and I'm struggling to communicate this to my boss and family members. I feel guilty for even considering saying no, but I know I need to take care of myself. 
""" messages = [ {"role" : "system", "content" : system}, {"role" : "user", "content" : question} ] text = tokenizer.apply_chat_template( messages, tokenize = False, add_generation_prompt = True, ) from transformers import TextStreamer _ = model.generate( **tokenizer(text, return_tensors = "pt").to("cuda"), max_new_tokens = 900, temperature = 0.7, top_p = 0.8, top_k = 20, streamer = TextStreamer(tokenizer, skip_prompt = True), ) ``` **For pipeline:** ```python from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct") base_model = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B-Instruct") model = PeftModel.from_pretrained(base_model, "khazarai/MentalChat-16K") system = """You are a helpful mental health counselling assistant, please answer the mental health questions based on the patient's description. The assistant gives helpful, comprehensive, and appropriate answers to the user's questions. """ question = """ I've been feeling overwhelmed by my responsibilities at work and caring for my aging parents. I've reached a point where I don't know what else I can do, and I'm struggling to communicate this to my boss and family members. I feel guilty for even considering saying no, but I know I need to take care of myself. """ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) messages = [ {"role" : "system", "content" : system}, {"role": "user", "content": question} ] pipe(messages) ``` ### Framework versions - PEFT 0.15.2
schroneko/nanoGPT-10M
schroneko
2025-09-24T23:58:32Z
0
0
null
[ "region:us" ]
null
2025-09-24T23:57:34Z
# nanoGPT Shakespeare Model This repository contains a nanoGPT model (ckpt.pt) pretrained on the Shakespeare dataset, together with the inference code (sample.py). nanoGPT is Andrej Karpathy's minimalist implementation of the GPT architecture, designed for educational purposes; it makes it easy to train and sample from small language models. Following the https://github.com/karpathy/nanoGPT repository, the character-level Shakespeare model was trained on a Mac Studio. It has roughly 10M parameters. ## Usage Load the model with PyTorch and use sample.py to generate text. Example: ```bash python sample.py --out_dir=out-shakespeare-char --device=mps ```
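## Reproducing the training run

For reference, a checkpoint like this one can be reproduced with the upstream nanoGPT scripts. The commands below follow the karpathy/nanoGPT README for the character-level Shakespeare config on Apple Silicon; they are illustrative and assume you are inside a clone of that repository:

```bash
# prepare the character-level Shakespeare dataset
python data/shakespeare_char/prepare.py

# train a ~10M-parameter model on Apple Silicon (MPS backend, torch.compile disabled)
python train.py config/train_shakespeare_char.py --device=mps --compile=False

# sample from the resulting checkpoint
python sample.py --out_dir=out-shakespeare-char --device=mps
```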
openagi-agi/ShaNet-2-Beta
openagi-agi
2025-09-24T23:58:03Z
0
0
null
[ "text-generation", "en", "dataset:lmsys/lmsys-chat-1m", "license:gpl-3.0", "region:us" ]
text-generation
2025-09-24T23:14:41Z
--- license: gpl-3.0 datasets: - lmsys/lmsys-chat-1m language: - en pipeline_tag: text-generation --- <img src="ShaNet.png" alt="ShaNet Banner" width="100%"> # ShaNet ShaNet is a Generative Pre-trained Transformer (GPT) trained on conversational data, designed to understand and generate human-like text based on the input it receives. This model can be used for various applications such as chatbots, content generation, and more. ## Features - **Conversational Understanding**: Trained on a diverse dataset to understand context and nuances in conversations. - **Text Generation**: Capable of generating coherent and contextually relevant text. - **Customizable**: Can be fine-tuned for specific applications or domains. - **Open Source**: Available for use and modification under the GPL-3.0 license. ## Installation To install ShaNet, download all files and run the `chat.py` script. Note: - The train and config code are modified versions of train.py and configurator.py from [NanoGPT](https://github.com/karpathy/nanoGPT).
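## Quick start (sketch)

A minimal sketch of the installation step described above, assuming the repository files are downloaded with the Hugging Face CLI and that `chat.py` runs without extra arguments (its options are not documented here):

```bash
# download all files from this repo into a local folder
huggingface-cli download openagi-agi/ShaNet-2-Beta --local-dir ShaNet-2-Beta

# start the chat script
cd ShaNet-2-Beta
python chat.py
```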
khazarai/Chemistry-R1
khazarai
2025-09-24T23:56:17Z
39
1
peft
[ "peft", "safetensors", "trl", "unsloth", "sft", "transformers", "text-generation", "conversational", "en", "dataset:camel-ai/loong", "base_model:unsloth/Qwen3-0.6B", "base_model:adapter:unsloth/Qwen3-0.6B", "license:mit", "region:us" ]
text-generation
2025-09-16T20:40:05Z
--- base_model: unsloth/Qwen3-0.6B library_name: peft license: mit datasets: - camel-ai/loong language: - en pipeline_tag: text-generation tags: - trl - unsloth - sft - transformers --- # Model Card for Chemistry-R1 ## Model Details - **Name:** Chemistry-R1 - **Base Model:** Qwen3-0.6B - **Fine-Tuning Dataset:** ~2,000 chemistry reasoning problems, where solutions are computed step-by-step using Python code. - **Training Objective:** The model was fine-tuned to reason through chemistry problems, generate step-by-step solutions using Python, and compute the final answer programmatically. - Capabilities: - Solves quantitative chemistry problems using code-based reasoning. - Generates intermediate steps to explain calculations and chemical logic. - Can output results as numerical answers, chemical equations, or calculated values. ## Uses ### Direct Use This model is designed for: - Educational Assistance: Helping students and educators solve and explain chemistry problems programmatically. - Chemistry Problem Solving: Generating step-by-step solutions for quantitative chemistry calculations. - Automated Reasoning Pipelines: Integrating into applications where chemistry computations need algorithmic precision. ## Bias, Risks, and Limitations - Numerical Precision: The model may occasionally produce incorrect numerical results due to floating-point approximations or coding logic errors. Always verify critical calculations. - Scope of Chemistry Knowledge: Fine-tuned on ~2K problems, so it may fail on very advanced or niche chemistry topics not represented in the training set. - Python Execution Needed: The model generates Python code to solve problems, so it relies on a safe execution environment for computing final answers. It may not directly provide plain-text solutions without executing code. - No Safety Checks: It does not account for chemical hazards, experimental safety, or lab protocols—only theoretical reasoning. - Limited Generalization: Performance may degrade on problems requiring multi-step reasoning beyond the patterns seen in the fine-tuning dataset. ## How to Get Started with the Model Use the code below to get started with the model. ```python from huggingface_hub import login from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B",) base_model = AutoModelForCausalLM.from_pretrained( "unsloth/Qwen3-0.6B", device_map={"": 0} ) model = PeftModel.from_pretrained(base_model,"khazarai/Chemistry-R1") question = """ A bowl contains 10 jellybeans (four red, one blue and five white). If you pick three jellybeans from the bowl at random and without replacement, what is the probability that exactly two will be red? 
Express your answer as a common fraction """ messages = [ {"role" : "user", "content" : question} ] text = tokenizer.apply_chat_template( messages, tokenize = False, add_generation_prompt = True, enable_thinking = True, ) from transformers import TextStreamer _ = model.generate( **tokenizer(text, return_tensors = "pt").to("cuda"), max_new_tokens = 1500, temperature = 0.6, top_p = 0.95, top_k = 20, streamer = TextStreamer(tokenizer, skip_prompt = True), ) ``` **For pipeline:** ```python from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B") base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-0.6B") model = PeftModel.from_pretrained(base_model, "khazarai/Chemistry-R1") question=""" A bowl contains 10 jellybeans (four red, one blue and five white). If you pick three jellybeans from the bowl at random and without replacement, what is the probability that exactly two will be red? Express your answer as a common fraction? """ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) messages = [ {"role": "user", "content": question} ] pipe(messages) ``` ### Framework versions - PEFT 0.15.2
danchev/ibm-granite-docling-258M-GGUF
danchev
2025-09-24T23:55:38Z
0
0
null
[ "gguf", "image-text-to-text", "en", "base_model:ibm-granite/granite-docling-258M", "base_model:quantized:ibm-granite/granite-docling-258M", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2025-09-24T22:44:12Z
--- license: apache-2.0 language: - en base_model: - ibm-granite/granite-docling-258M pipeline_tag: image-text-to-text tags: - gguf --- ## ibm-granite-docling-258M-GGUF This is the GGUF version of the [ibm-granite/granite-docling-258M](https://huggingface.co/ibm-granite/granite-docling-258M) model. ### Model Information - **Model Name**: granite-docling-258M - **Base Model**: ibm-granite/granite-docling-258M - **License**: Apache-2.0 - **Pipeline Tag**: image-text-to-text - **Language**: English - **Model Size**: 258M - **Model Format**: GGUF ### Description Granite Docling is a family of instruction-tuned models designed for document understanding tasks. These models are fine-tuned on a diverse set of tasks including document classification, information extraction, and question answering. The models are optimized for performance on document-centric tasks and can handle a variety of document formats and layouts. ### Usage You need [this](https://github.com/ggml-org/llama.cpp/pull/16206) version of the `llama.cpp` to run these GGUF models. ```bash git clone [email protected]:gabe-l-hart/llama.cpp.git GraniteDocling cd GraniteDocling git checkout GraniteDocling cmake -B build cmake --build build --config Release -j $(nproc) ``` Once you have `llama.cpp` set up, you can use the following command to run the model: ```bash ./build/bin/llama-server -hf danchev/ibm-granite-docling-258M-GGUF ``` You can then send requests to the server using `curl`. Here is an example request: ```bash curl -X POST "http://localhost:8080/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "ibm-granite/granite-docling-258M", "messages": [ { "role": "user", "content": [ { "type": "text", "text": "Describe this image in one sentence." }, { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } } ] } ] }' ```
corzamennav/blockassist-bc-territorial_wild_antelope_1758757975
corzamennav
2025-09-24T23:54:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-24T23:53:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
khazarai/Medical-QA
khazarai
2025-09-24T23:54:11Z
13
1
peft
[ "peft", "safetensors", "trl", "unsloth", "sft", "transformers", "medical", "text-generation", "conversational", "en", "dataset:medalpaca/medical_meadow_medical_flashcards", "base_model:unsloth/Qwen3-0.6B", "base_model:adapter:unsloth/Qwen3-0.6B", "license:mit", "region:us" ]
text-generation
2025-09-20T15:49:36Z
--- base_model: unsloth/Qwen3-0.6B library_name: peft license: mit datasets: - medalpaca/medical_meadow_medical_flashcards language: - en pipeline_tag: text-generation tags: - trl - unsloth - sft - transformers - medical --- # Model Card for Medical-QA ## Model Details This model is a fine-tuned version of Qwen3-0.6B on a 34K medical Q&A dataset derived from the Anki Medical Curriculum flashcards. It is designed to assist with medical education and exam preparation, offering concise and contextually relevant answers to short medical questions. - **Base Model:** Qwen3-0.6B - **Fine-tuned on:** 34,000 question-answer pairs - **Domain:** Medicine & Medical Education - **Languages:** English - **License:** MIT ## Uses ### Direct Use - Primary use case: Medical Q&A for students, exam preparation, and knowledge review. - Suitable for interactive learning assistants or educational chatbots. - Not intended for real-world clinical decision-making or replacing professional medical advice. ## Bias, Risks, and Limitations - The model’s knowledge is constrained to the dataset scope (flashcard-style Q&A). - Responses are short and exam-style rather than detailed clinical explanations. - Should not be relied upon for actual patient care, treatment decisions, or emergency use. ## How to Get Started with the Model Use the code below to get started with the model. ```python from huggingface_hub import login from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B",) base_model = AutoModelForCausalLM.from_pretrained( "unsloth/Qwen3-0.6B", device_map={"": 0} ) model = PeftModel.from_pretrained(base_model,"khazarai/Medical-QA") system = "Answer this question truthfully" question = """ What can β-blockers cause or exacerbate due to excessive AV nodal inhibition? """ messages = [ {"role" : "system", "content" : system}, {"role" : "user", "content" : question} ] text = tokenizer.apply_chat_template( messages, tokenize = False, add_generation_prompt = True, enable_thinking = False, ) from transformers import TextStreamer _ = model.generate( **tokenizer(text, return_tensors = "pt").to("cuda"), max_new_tokens = 512, temperature = 0.7, top_p = 0.8, top_k = 20, streamer = TextStreamer(tokenizer, skip_prompt = True), ) ``` **For pipeline:** ```python from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B") base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-0.6B") model = PeftModel.from_pretrained(base_model, "khazarai/Medical-QA") system = "Answer this question truthfully" question = """ What can β-blockers cause or exacerbate due to excessive AV nodal inhibition? """ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) messages = [ {"role": "system", "content": system}, {"role": "user", "content": question} ] pipe(messages) ``` ## Training Details ### Training Data The dataset is based on Anki Medical Curriculum flashcards, created and updated by medical students. These flashcards cover the entire medical curriculum, including but not limited to: - Anatomy - Physiology - Pathology - Pharmacology - Clinical knowledge and skills The flashcards typically provide succinct summaries and mnemonics to support learning and retention. ### Framework versions - PEFT 0.15.2
yueyulin/rwkv_asr
yueyulin
2025-09-24T23:47:32Z
0
0
null
[ "automatic-speech-recognition", "zh", "en", "dataset:wenet-e2e/wenetspeech", "dataset:MLCommons/peoples_speech", "base_model:fla-hub/rwkv7-0.4B-g1a", "base_model:finetune:fla-hub/rwkv7-0.4B-g1a", "region:us" ]
automatic-speech-recognition
2025-09-21T12:46:43Z
--- datasets: - wenet-e2e/wenetspeech - MLCommons/peoples_speech language: - zh - en base_model: - openai/whisper-large-v3 - fla-hub/rwkv7-0.4B-g1a pipeline_tag: automatic-speech-recognition --- RWKV ASR adds an audio modality to the RWKV7 model, which means the RWKV7 base model stays unaltered. A 0.1B RWKV model is trained to map the whisper-large-v3 encoder's latents into RWKV7's latent space, so that speech is converted into text according to the text instruction. This design keeps all abilities of the LLM and makes it easy to add further functions to the model, such as speech-to-speech, speech translation, and more. The architecture looks like: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63a00aa29f1f2baab2034cf8/4bM4sOb-0z5bNr1Ng7MhY.png) # Usage Inference sample code: https://github.com/yynil/RWKVTTS/blob/respark/model/test/test_asr_whisper.py 1. Download the weights in this repo. Please note: the 10k-step checkpoint was trained on around 5k hours of audio, which is a very small amount of data, and training is continuing. This also shows that the approach needs relatively little data to reach a usable stage. 2. Download the configuration directories in this repo. Assume you store them in the directory YOUR_DIR. 3. Run the script like: ```bash python model/test/test_asr_whisper.py --whisper_path $YOUR_DIR/whisper-large-v3/ --audio_lm_path $YOUR_DIR/rwkv7_0.1b_audio_lm_latents/ --llm_path $YOUR_DIR/rwkv7-0.4B-g1a/ --ckpt_path $YOUR_DIR/rwkvasr_whisper_10k.model.bin --audio_path new.mp3 ``` The output looks like: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63a00aa29f1f2baab2034cf8/hvo21_B53PnCdybRlIDyi.png) or, in English mode: ```bash python model/test/test_asr_whisper.py --whisper_path $YOUR_DIR/whisper-large-v3/ --audio_lm_path $YOUR_DIR/rwkv7_0.1b_audio_lm_latents/ --llm_path /home/yueyulin/models/rwkv7-0.4B-g1a/ --ckpt_path $YOUR_DIR/rwkvasr_whisper_10k.model.bin --audio_path eng2.wav --language english ``` The output looks like: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63a00aa29f1f2baab2034cf8/3TpFly4KIM7-5C7W3jM0b.png)
lynn-mikami/wan-testing
lynn-mikami
2025-09-24T23:46:21Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-07-18T10:20:30Z
--- license: apache-2.0 ---
deadman44/Qwen-Image_LoRA
deadman44
2025-09-24T23:44:40Z
0
6
null
[ "text-to-image", "qwen image", "safetensors", "en", "license:apache-2.0", "region:us" ]
text-to-image
2025-09-18T00:50:15Z
--- license: apache-2.0 pipeline_tag: text-to-image language: - en tags: - text-to-image - qwen image - safetensors --- <style> .title{ font-size: 2.5em; letter-spacing: 0.01em; padding: 0.5em 0; } .thumbwidth{ max-width: 180px; } .font_red{ color:red; } .font_blue{ color:blue; } .font_grey{ color: #aaaaaa; } </style> # models - Add [lora_qwen_myjc_v01](#myjc) (<span class="font_blue">Qwen-Image LoRA</span>):2025-09-23<br /> --- <br> # Sample Workflow Qwen-Image-gguf ### - [Workflow for myxx series LoRA](https://huggingface.co/deadman44/Qwen-Image_LoRA/raw/main/workflow/qwen_image.json)<br> - <span class="font_blue">reccomended</span><br/> ### - [Workflow for Double](https://huggingface.co/deadman44/Qwen-Image_LoRA/raw/main/workflow/qwen_image_Double.json)<br> - <span class="font_blue">I feel better</span><br/> <br> Qwen-Image-Edit-2509-GGUF ### - [Workflow Image Edit](https://huggingface.co/deadman44/Qwen-Image_LoRA/raw/main/workflow/qwen_image_edit.json)<br> - <span class="font_blue">Qwen-Image-Edit-2509 workflow</span> <br> ## - reccomended models - [Qwen-Image-gguf](https://huggingface.co/city96/Qwen-Image-gguf/tree/main) - [Qwen-Image-Edit-2509-GGUF](https://huggingface.co/QuantStack/Qwen-Image-Edit-2509-GGUF/tree/main) - [Text Encoder](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/text_encoders) - [VAE](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/vae) - [Lightx2v LoRA](https://huggingface.co/lightx2v/Qwen-Image-Lightning/tree/main) --- <a id="myjc"></a> <h1 class="title"> <span>lora_qwen_myjc_v01</span> </h1> -<span class="font_red">Lora for Qwen-Image</span><br/> -<span class="font_blue">natural Japanese JC face</span><br/> <br/> <br/> # Download [Download: myjc_v01](https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/lora_qwen_myjc_v01.safetensors?download=true) <br /> <br /> # Trigger ```bash myjc, japanese/european, photorealistic and 13-15yo ``` <br /> # Sample prompt <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923154926_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923154926_qwen_image_00001_.jpg" alt="T2I" style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> ```bash 15yo, myjc, japanese, a Japanese schoolgirl in uniform holding a flip board with "qwen" written on it, smiling awkwardly after a small clumsy mistake, like nearly tripping or dropping her pen, surrounded by classmates laughing gently, warm afternoon sunlight, cherry blossoms in the background, cinematic composition, soft shadows, emotionally expressive, humorous and heartwarming mood ``` <br/> <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923155356_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923155356_qwen_image_00001_.jpg" alt="T2I" style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> ```bash 15yo, myjc, japanese, black hair, This photograph of a girl sitting on a bench in a train. 
She has straight long black twintails and is wearing a short sleeve white shirt with a collar and a grey pleated skirt. Her posture is relaxed and her expression is neutral. She holds a smartphone in her right hand looking at camera. A black handbag is placed on her lap. The background shows the interior of a train car with metallic walls and a green and white patterned seat. The lighting is dim and the overall atmosphere is typical of a public transportation setting. The image is candid and captures a moment of casualness and comfort. ``` <br/> <strng>Normal Node (reccomended)</strong> <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923161905_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923161905_qwen_image_00001_.jpg" alt="T2I" style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> <strng>Triple Node (experimental)</strong> <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923162424_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923162424_qwen_image_00001_.jpg" alt="T2I" style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> ```bash 14yo, myjc, japanese, straight long hair, bangs, smile, The photograph of a young girl in casual uniform lie on your back on a table surrounded by several men. The background is a dark restaurant and the girl is illuminated by lights. The image is viewed from an angle. 
``` <br/> <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923160144_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923160144_qwen_image_00001_.jpg" alt="T2I" style="width: 480px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> ```bash 15yo, myjc, japanese, five schoolgirls in sailor uniforms striking playful sentai-style poses on a quiet urban street, each with a different hairstyle: ponytail, short bob, twin braids, loose long hair, and side bun, natural lighting, casual atmosphere, no special effects, soft shadows, relaxed expressions, subtle smiles, everyday setting with buildings and trees in the background, cinematic composition, emotionally expressive, group coordination with individuality ``` <br/> ### Image Edit <strong>reference images</strong> <div style="display: flex; flex-direction: row; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250924173208_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250924173208_qwen_image_00001_.jpg" alt="T2I" style="width: 180px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250924173712_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250924173712_qwen_image_00001_.jpg" alt="T2I" style="width: 180px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250924173838_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250924173838_qwen_image_00001_.jpg" alt="T2I" style="width: 300px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> <strong>Image Edit</strong> <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250924182939_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250924182939_qwen_image_00001_.jpg" alt="T2I" style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> ```bash A (girl in image1) is wearing the (outfit from image2) and standing with her hands on her waist in the location shown in (image3 background). ``` <br/> ---
corzamennav/blockassist-bc-territorial_wild_antelope_1758757359
corzamennav
2025-09-24T23:44:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-24T23:43:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tomal66/gemma3-1b-banfake-T3
tomal66
2025-09-24T23:42:07Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-24T23:42:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
foobar2333/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q4_K_M-GGUF
foobar2333
2025-09-24T23:39:20Z
0
0
transformers
[ "transformers", "gguf", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated", "base_model:quantized:huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T22:22:05Z
--- license: apache-2.0 language: - en base_model: huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated pipeline_tag: text-generation library_name: transformers tags: - abliterated - uncensored - llama-cpp - gguf-my-repo --- # ggml-org/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q4_K_M-GGUF This model was converted to GGUF format from [`huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated`](https://huggingface.co/huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo ggml-org/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q4_K_M-GGUF --hf-file huihui-tongyi-deepresearch-30b-a3b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo ggml-org/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q4_K_M-GGUF --hf-file huihui-tongyi-deepresearch-30b-a3b-abliterated-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo ggml-org/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q4_K_M-GGUF --hf-file huihui-tongyi-deepresearch-30b-a3b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo ggml-org/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q4_K_M-GGUF --hf-file huihui-tongyi-deepresearch-30b-a3b-abliterated-q4_k_m.gguf -c 2048 ```
mosesshah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-squinting_grassy_prawn
mosesshah
2025-09-24T23:39:06Z
160
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am squinting_grassy_prawn", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T01:27:49Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am squinting_grassy_prawn --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
eequalsmcsquared/q-FrozenLake-v1-4x4-noSlippery
eequalsmcsquared
2025-09-24T23:34:41Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-24T23:34:39Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="eequalsmcsquared/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
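The usage snippet above assumes a `load_from_hub` helper and an already-imported `gym`, neither of which is shown in the card. A minimal sketch of such a helper in the style of the Hugging Face Deep RL course follows; the pickled dictionary layout and the `gymnasium` import are assumptions, not part of this repository.

```python
import pickle

import gymnasium as gym  # assumption: the classic `gym` package would also work
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the saved model dict (Q-table, env_id, hyperparameters, ...)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(
    repo_id="eequalsmcsquared/q-FrozenLake-v1-4x4-noSlippery",
    filename="q-learning.pkl",
)
# is_slippery=False matches the "no_slippery" variant this agent was trained on
env = gym.make(model["env_id"], is_slippery=False)
```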
corzamennav/blockassist-bc-territorial_wild_antelope_1758756743
corzamennav
2025-09-24T23:33:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-24T23:33:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
haihp02/qweeeen
haihp02
2025-09-24T23:33:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T20:32:33Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF
mradermacher
2025-09-24T23:33:09Z
0
0
transformers
[ "transformers", "gguf", "ita", "italian", "anita", "magistral", "24b", "uniba", "bari", "italy", "italia", "Conversational", "LLaMantino", "en", "it", "base_model:m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA", "base_model:quantized:m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-24T19:31:19Z
--- base_model: m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA language: - en - it library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - ita - italian - anita - magistral - 24b - uniba - bari - italy - italia - Conversational - LLaMantino --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q2_K.gguf) | Q2_K | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q3_K_S.gguf) | Q3_K_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q3_K_L.gguf) | Q3_K_L | 12.5 | | | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.IQ4_XS.gguf) | IQ4_XS | 13.0 | | | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q5_K_S.gguf) | Q5_K_S | 16.4 | | | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q5_K_M.gguf) | Q5_K_M | 16.9 | | | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q6_K.gguf) | Q6_K | 19.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF/resolve/main/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q8_0.gguf) | 
Q8_0 | 25.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
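For a quick local test of one of the files above, an invocation along the following lines should work with a recent llama.cpp build; the Q4_K_M filename is taken from the table, and the prompt is only a placeholder.

```bash
# Sketch: stream the Q4_K_M quant straight from this repo with llama.cpp's CLI
llama-cli --hf-repo mradermacher/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF \
  --hf-file ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA.Q4_K_M.gguf \
  -p "Ciao, come stai?"
```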
mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF
mradermacher
2025-09-24T23:33:08Z
0
0
transformers
[ "transformers", "gguf", "programming", "code generation", "code", "coding", "coder", "chat", "brainstorm", "qwen", "qwen3", "qwencoder", "brainstorm 20x", "creative", "all uses cases", "Jan-V1", "horror", "science fiction", "fantasy", "Star Trek", "Star Trek Original", "Star Trek The Next Generation", "Star Trek Deep Space Nine", "Star Trek Voyager", "Star Trek Enterprise", "Star Trek Discovery.", "finetune", "thinking", "reasoning", "unsloth", "6x6B", "moe", "mixture of experts", "en", "dataset:DavidAU/horror-nightmare1", "dataset:DavidAU/ST-Org", "dataset:DavidAU/ST-TNG", "dataset:DavidAU/ST-DS9", "dataset:DavidAU/ST-VOY", "dataset:DavidAU/ST-ENT", "dataset:DavidAU/ST-DIS", "base_model:DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B", "base_model:quantized:DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-24T20:18:48Z
--- base_model: DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B datasets: - DavidAU/horror-nightmare1 - DavidAU/ST-Org - DavidAU/ST-TNG - DavidAU/ST-DS9 - DavidAU/ST-VOY - DavidAU/ST-ENT - DavidAU/ST-DIS language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - programming - code generation - code - coding - coder - chat - code - chat - brainstorm - qwen - qwen3 - qwencoder - brainstorm 20x - creative - all uses cases - Jan-V1 - horror - science fiction - fantasy - Star Trek - Star Trek Original - Star Trek The Next Generation - Star Trek Deep Space Nine - Star Trek Voyager - Star Trek Enterprise - Star Trek Discovery. - finetune - thinking - reasoning - unsloth - 6x6B - moe - mixture of experts - finetune --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own qwuants) | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ1_M.gguf) | i1-IQ1_M | 6.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ2_S.gguf) | i1-IQ2_S | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 9.2 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q2_K.gguf) | i1-Q2_K | 9.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 10.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 11.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ3_S.gguf) | i1-IQ3_S | 11.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ3_M.gguf) | i1-IQ3_M | 11.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 12.9 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q4_0.gguf) | i1-Q4_0 | 15.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 15.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 16.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q4_1.gguf) | i1-Q4_1 | 16.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 18.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 18.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B-i1-GGUF/resolve/main/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.i1-Q6_K.gguf) | i1-Q6_K | 21.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
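The first row of the table above is the importance matrix itself; with a local llama.cpp build it can be reused to produce quant types not listed here. A rough sketch follows: the f16 source GGUF is assumed to have been converted separately, and older llama.cpp builds may expect a `.dat`-style imatrix rather than this GGUF-format one.

```bash
# Sketch: make a custom IQ4_NL quant using the provided importance matrix
llama-quantize \
  --imatrix Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.imatrix.gguf \
  Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.f16.gguf \
  Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-256k-ctx-36B.IQ4_NL.gguf IQ4_NL
```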
ethanCSL/svla_color_test
ethanCSL
2025-09-24T23:24:58Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:ethanCSL/color_test", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-24T23:23:45Z
--- base_model: lerobot/smolvla_base datasets: ethanCSL/color_test library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - lerobot - smolvla - robotics --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
mradermacher/shisa-v2-llama3.1-405b-GGUF
mradermacher
2025-09-24T23:19:30Z
0
0
transformers
[ "transformers", "ja", "en", "ko", "zh", "dataset:shisa-ai/shisa-v2-sharegpt", "dataset:shisa-ai/deepseekv3-ultrafeedback-armorm", "base_model:shisa-ai/shisa-v2-llama3.1-405b", "base_model:finetune:shisa-ai/shisa-v2-llama3.1-405b", "license:llama3.1", "endpoints_compatible", "region:us" ]
null
2025-09-24T04:54:28Z
--- base_model: shisa-ai/shisa-v2-llama3.1-405b datasets: - shisa-ai/shisa-v2-sharegpt - shisa-ai/deepseekv3-ultrafeedback-armorm language: - ja - en - ko - zh library_name: transformers license: llama3.1 model_name: shisa-v2-llama3.1-405b mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/shisa-ai/shisa-v2-llama3.1-405b <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#shisa-v2-llama3.1-405b-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [PART 1](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q2_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q2_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q2_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q2_K.gguf.part4of4) | Q2_K | 149.4 | | | [PART 1](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_S.gguf.part4of4) | Q3_K_S | 175.3 | | | [PART 1](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_M.gguf.part4of4) | Q3_K_M | 195.5 | lower quality | | [P1](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_L.gguf.part1of5) [P2](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_L.gguf.part2of5) [P3](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_L.gguf.part3of5) 
[P4](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_L.gguf.part4of5) [P5](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q3_K_L.gguf.part5of5) | Q3_K_L | 212.9 | | | [P1](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q4_K_S.gguf.part1of5) [P2](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q4_K_S.gguf.part2of5) [P3](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q4_K_S.gguf.part3of5) [P4](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q4_K_S.gguf.part4of5) [P5](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q4_K_S.gguf.part5of5) | Q4_K_S | 230.6 | fast, recommended | | [P1](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q4_K_M.gguf.part1of5) [P2](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q4_K_M.gguf.part2of5) [P3](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q4_K_M.gguf.part3of5) [P4](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q4_K_M.gguf.part4of5) [P5](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q4_K_M.gguf.part5of5) | Q4_K_M | 243.2 | fast, recommended | | [P1](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_S.gguf.part6of6) | Q5_K_S | 279.4 | | | [P1](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_M.gguf.part1of6) [P2](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_M.gguf.part2of6) [P3](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_M.gguf.part3of6) [P4](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_M.gguf.part4of6) [P5](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_M.gguf.part5of6) [P6](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q5_K_M.gguf.part6of6) | Q5_K_M | 286.7 | | | [P1](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q6_K.gguf.part1of7) [P2](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q6_K.gguf.part2of7) 
[P3](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q6_K.gguf.part3of7) [P4](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q6_K.gguf.part4of7) [P5](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q6_K.gguf.part5of7) [P6](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q6_K.gguf.part6of7) [P7](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q6_K.gguf.part7of7) | Q6_K | 333.0 | very good quality | | [P1](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q8_0.gguf.part1of9) [P2](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q8_0.gguf.part2of9) [P3](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q8_0.gguf.part3of9) [P4](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q8_0.gguf.part4of9) [P5](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q8_0.gguf.part5of9) [P6](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q8_0.gguf.part6of9) [P7](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q8_0.gguf.part7of9) [P8](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q8_0.gguf.part8of9) [P9](https://huggingface.co/mradermacher/shisa-v2-llama3.1-405b-GGUF/resolve/main/shisa-v2-llama3.1-405b.Q8_0.gguf.part9of9) | Q8_0 | 431.3 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
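Because most of the quants above are split into `.partXofY` pieces, they have to be joined into a single `.gguf` before loading. As described in the READMEs linked in the Usage section, a plain byte-wise concatenation in part order is sufficient; the example below uses the four-part Q2_K files listed in the table.

```bash
# Download all parts of a quant, then join them in order into one GGUF file
cat shisa-v2-llama3.1-405b.Q2_K.gguf.part1of4 \
    shisa-v2-llama3.1-405b.Q2_K.gguf.part2of4 \
    shisa-v2-llama3.1-405b.Q2_K.gguf.part3of4 \
    shisa-v2-llama3.1-405b.Q2_K.gguf.part4of4 > shisa-v2-llama3.1-405b.Q2_K.gguf
```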
lsBubble/Qwen3-0.6B-Gensyn-Swarm-marine_regal_rat
lsBubble
2025-09-24T23:19:24Z
24
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am marine_regal_rat", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T12:40:13Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am marine_regal_rat --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
paul-stansifer/qw3-qwen3-8b-1x2e-4
paul-stansifer
2025-09-24T23:18:57Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "qwen3", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-24T22:39:14Z
--- base_model: unsloth/qwen3-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** paul-stansifer - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
yujunzhou/SFT_Advanced_Risk_Situation_Aware_Qwen3_4B_Base
yujunzhou
2025-09-24T23:18:05Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-4B-Base", "base_model:finetune:Qwen/Qwen3-4B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T20:50:23Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen3-4B-Base tags: - llama-factory - full - generated_from_trainer model-index: - name: SFT_Advanced_Risk_Situation_Aware_Qwen3_4B_Base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SFT_Advanced_Risk_Situation_Aware_Qwen3_4B_Base This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base) on the Advanced_Risk_Situation_Aware_Qwen3_4B_Base dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 2.19.2 - Tokenizers 0.21.1
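Since the card leaves usage unspecified, here is a minimal, untested sketch of loading this checkpoint with `transformers` for chat-style generation; the prompt and generation settings are placeholders, not values from the training run.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yujunzhou/SFT_Advanced_Risk_Situation_Aware_Qwen3_4B_Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the `accelerate` package; drop it to load on CPU
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Describe the main risks in this situation: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```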
Jbautistas/whisper-large-v3-lora-medical
Jbautistas
2025-09-24T23:16:31Z
19
0
null
[ "safetensors", "whisper", "es", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-09-03T20:34:45Z
--- finetuned_from: openai/whisper-large-v3 metrics: - name: wer type: wer value: 54.134929270946685 tasks: - automatic-speech-recognition language: - es license: apache-2.0 --- # Whisper Large v3 - LoRA Fine-Tuned (Spanish Medical) This model is a LoRA fine-tune of `openai/whisper-large-v3` on Spanish medical data. ## Metrics - **WER (test set)**: 54.13 ## Dataset Trained on a private dataset (~2 hours of Spanish medical audio). ## Usage ```python from transformers import pipeline pipe = pipeline("automatic-speech-recognition", model="Jbautistas/whisper-large-v3-lora-medical") ```
amonamarth/moneta_llm_14B_fp16_v2
amonamarth
2025-09-24T23:15:57Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Qwen2.5-14B", "base_model:finetune:unsloth/Qwen2.5-14B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T23:03:32Z
--- base_model: unsloth/Qwen2.5-14B tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** amonamarth - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-14B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
muooon/EmoNAVI
muooon
2025-09-24T23:13:26Z
0
0
null
[ "optimizer", "adaptive-optimizer", "emotion-ai", "shadow-learning", "deep-learning", "meta-learning", "adaptive-algorithms", "stability-analysis", "en", "ja", "license:apache-2.0", "region:us" ]
null
2025-07-06T08:36:05Z
--- license: apache-2.0 language: - en - ja model_type: optimizer tags: - optimizer - adaptive-optimizer - emotion-ai - shadow-learning - deep-learning - meta-learning - adaptive-algorithms - stability-analysis --- **自動収束・自己制御・自律型 オプティマイザです** **Auto-convergence, self-control, autonomous optimizer** #### ユーザーと研究者へ/このリンクを読んでください/please click! [ユーザーと研究者へ/このリンクを読んでください/please click!](https://huggingface.co/muooon/EmoNAVI/raw/main/report-emoment.txt) emonavi挙動まとめ(日本語のみ) [emonavi挙動まとめ(日本語のみ)](https://huggingface.co/muooon/EmoNAVI/raw/main/report/emonavi%E6%8C%99%E5%8B%95%E3%81%BE%E3%81%A8%E3%82%81.txt) Gemini に見せていろいろ聞いてみました [Geminiに聞いてみた](https://huggingface.co/muooon/EmoNAVI/blob/main/Hug-Gemini-analysis(JPN).md) [Geminiに聞いてみた-02(日本語のみ)](https://huggingface.co/muooon/EmoNAVI/blob/main/emonavi-Gemini-analysis(2)(JPN).txt) I showed it to Gemini and asked her a few questions. 02 is only in Japanese - please translate by yourself. [asked Gemini](https://huggingface.co/muooon/EmoNAVI/blob/main/Hug-Gemini-analysis(ENG).md) |★| EmoNAVI、FACT、LYNX、CLAN、ZEAL、NECO、v3.0 (250825) emosens(第2世代)で解明した"高次moment"(近似)のフィードバックを適用(更新) 全て "shadow=False" です |★| EmoNAVI, FACT, LYNX, CLAN, ZEAL, NECO, updated to v3.0 (250825), Incorporates (updates) feedback on “higher moments” (approximations) clarified by emosens (2nd generation). All are “shadow=False” |★| EmoNAVI、FACT、LYNX、CLAN、ZEAL、NECO、v2.0 (250815) 更新、shadow-system の精密化(更新) |★| EmoNAVI, FACT, LYNX, CLAN, ZEAL, NECO, updated to v2.0 (250815), refinement of shadow-system (update) |★| 第2世代を公開(250801)しました。 emonavi は、新しい世代へ進化し軽量化を果たします |★| The 2nd gen was release(250801) emonavi has evolved into a new generation and become more lightweight. |★| https://github.com/muooon/EmoSens |★| clan、zeal、neco、は、shadow機能の on/off 切替えをできるようにしました |★| clan, zeal, and neco are now able to switch the shadow function on and off. |★| 大変光栄なことに Pytorch-optimizer 3.7.0 へ登録されたとのこと (250728) 関係者の皆さまに深く感謝します |★| We are very honored to have been registered in Pytorch-optimizer 3.7.0. We would like to express our deepest gratitude to everyone involved. |★| 疑似DDPシミュレーションを試したい方(Those DDP simulation) → [DDP-TEST](https://huggingface.co/muooon/EmoNAVI/blob/main/ddp-test.zip) |★| EmoFACT 公開(250716) NAVIに比べ、約1GB節約(SDXL) 感情機構は同じです |★| EmoFACT released (250716) Saves about VRAM1GB (SDXL) compared to NAVI. Emotion mechanism is the same. |★| EmoLYNX 公開(250718) 探索範囲を広く持ちます 感情機構は同じです |★| EmoLYNX Released (250718): It offers a wide exploration range, while its Emotion Mechanism remains the same. 
|★| EmoCLAN 公開(250720) Navi、Fact、Lynx、役割分担の統合 感情機構は同じです (Lynx:序盤と過学習傾向時、Navi:中盤と健全時、Fact:終盤と発散傾向時、を担当します) |★| EmoCLAN Open (250720) Navi, Fact, Lynx, role integration Emotional mechanism is the same (Lynx: in charge of the early stage and overlearning tendency, Navi: in charge of the middle stage and soundness, Fact: in charge of the end stage and divergence tendency) # 主題:新世代optimizer、EmoNAVIによる変革と感情学習の成果 ## Title: A New Generation Optimizer — The Innovations and Outcomes of Emotional Learning with EmoNAVI ## 副題:過去値不要で現在値から再開できる自動収束・自己制御・自律型軽量最適器の解説 ### Subtitle: A Lightweight, Self-Regulating, Autonomous Optimizer That Automatically Converges and Resumes from the Present Without Relying on Past Values ## テーマ:既存のoptimizerにないものをつくる、出来たのはニューロンスパイクの再発明でした。 ### Theme: Creating What Existing Optimizers Lack — A Reinvention of Neuronal Spiking ## 序論: 現在主流のoptimizerはさまざまに改良され簡易化を進めています、しかし依然として、 学習再開、スケジューリング、学習状態の記録や復元、等について調整の難しさや煩雑さは存在しています、 面倒なパラメータに依存せず、それらを解決する新しいアプローチを見つけたのでここで紹介します。 ## Introduction Mainstream optimizers have undergone significant improvements and simplifications in recent years. However, they still face practical challenges in areas such as resuming training, scheduling updates, and managing the recording and restoration of learning states. These issues often require tedious parameter adjustments and ad hoc workarounds. In this paper, we introduce a new approach that addresses these problems without relying on cumbersome parameter configurations. ## 本論: 今回ここで紹介するのは新世代のoptimizerです、 EMA的平滑化の概念を下地にし、独自に構築した感情的"EMA&スカラー"を中心にした"感情機構"という新しい仕組みを実現しました、 この"感情機構"は、EMA的発想を再解釈・独自拡張することで得られた新しい機構です。 EmoNAVIの独立性と革新性を紹介します。 ## Main Section In this paper, we present a new generation of optimizer. Built upon the foundation of EMA (Exponential Moving Average) smoothing, we have developed a novel mechanism called the "emotional mechanism," which centers around a unique combination of EMA and scalar dynamics. This mechanism was created by reinterpreting and independently extending the conventional EMA concept. Here, we introduce EmoNAVI—an optimizer characterized by its innovation and independence. 最初に"感情機構"と名付けた経緯と理由を記します。 生物のもつ「感情」とは、知覚と記憶の差異に基づく行動のトリガです、同様にEmoNAVIも現在と過去の差分に基づき学習の"行動"を制御する仕組みとして設計されています。 そして"感情機構"と名付けた理由のもうひとつは、この一連の動作がまるでニューロンスパイクのような動作をするからです。 この機構"感情機構"の動作を明快にした読み物、本稿末尾に記すリンク先の擬人化を読むことで簡単にご理解頂けると思います。 First, let us explain the background and reasoning behind the term “Emotion Mechanism.” In biological systems, emotions are often understood as triggers for action based on discrepancies between perception and memory. EmoNAVI was similarly designed to control its learning “behavior” by responding to differences between the present and the past. Another reason we chose the term “Emotion Mechanism” is that its operation closely resembles neuronal spiking behavior. For a more intuitive understanding of how this mechanism works, we encourage you to read the personification linked at the end of this article. 次に、"感情機構"の構成を記します、 感情機構とは、2つのEMA、スカラー、Shadow、により構成されます。 Next, we outline the structure of the “Emotion Mechanism.” This mechanism consists of two EMAs, a scalar value, and a shadow component. まず2つのEMAによる"感情EMA"について説明します、 2つのEMAで構成します、短期型と長期型です、この2つのEMAはLossを監視し判断材料を得ます、 1つめ、短期型EMAは瞬間的なシグナル(緊張)を受け持ちます 2つめ、長期型EMAは平均した過去のシグナル(安静)を受け持ちます、 この2つのEMAは次に紹介する"感情スカラー"へそれぞれの持つ判断材料を渡します First, we describe the "Emotional EMA," which consists of two components: a short-term EMA and a long-term EMA. 
These two EMAs continuously monitor the loss value and serve as the basis for subsequent decision-making. The short-term EMA captures rapid, momentary signals (interpreted as “tension”), while the long-term EMA reflects more averaged, historical trends (“calm”). Both EMAs pass their respective signals to the "Emotion Scalar," which will be introduced in the next section. 次に、"感情スカラー"について説明します、 前述の"感情EMA"からの信号をスカラー値に変換します、スカラー値の変化は、これら2つのEMAの差分により常に動的変化を続けます、 "感情スカラー"はoptimizerにより書き換えた学習結果の是非を判定し、 "スカラー値が一定閾値を超えたときのみ"次に紹介するShadowの配合を決めます Next, we introduce the "Emotion Scalar." It converts the signals from the previously described Emotional EMA into a scalar value, which continuously changes in response to the difference between the short-term and long-term EMA. This scalar dynamically evaluates whether the learning update performed by the optimizer should be considered appropriate. Only when the scalar exceeds a certain threshold does it trigger the next step: determining how much of the "Shadow" should be blended into the learning parameters. 次に、Shadowについて説明します、 Shadowは学習開始直後にShadowとして保存され維持されます、このShadowは"過去の穏やかな状態"の記憶です、この情報は感情機構に追従しながらゆっくりと変化し続けます、 そして"感情スカラー"に応じ決められたratioで学習結果にブレンドとして反映されます、このブレンドの配合率も感情機構により動的に変化し続けます、 Next, we describe the "Shadow." At the beginning of training, a copy of the current parameters is saved and maintained as the Shadow. This Shadow represents a memory of past calm states, and it evolves slowly over time, following the guidance of the Emotion Mechanism. When the Emotion Scalar exceeds a certain threshold, a dynamic blend ratio is computed. This ratio determines how much of the Shadow is mixed into the current parameters. The blend ratio itself is also dynamically adjusted by the Emotion Mechanism in response to ongoing learning behavior. ここまで"感情機構"の構成と役割りを説明しました、続いて"感情機構"の動作機序を見ていきましょう。 まずoptimizerの学習結果が記録されます、この時"感情機構"は緊張と安静の差分情報で書き換えの是非を判定します、 この判定により、過度の学習と判断した場合は、過去の適切な状態をブレンドすることでノイズや暴走を抑制します、 適切な学習と判断した場合は、過去をブレンドしない選択をします、これをstep毎に行います、 Now that we have explained the structure and role of the Emotion Mechanism, let us examine how it operates. At each training step, the optimizer's updated parameters are recorded. The Emotion Mechanism then evaluates whether these updates are appropriate, based on the difference between short-term “tension” and long-term “calm” signals. If the mechanism determines that the update reflects excessive learning, it suppresses potential noise or instability by blending in a suitable portion of the past stable state (Shadow). Conversely, if the update is deemed appropriate, the mechanism chooses not to apply blending. This evaluation and adjustment are performed dynamically at each training step. さらに、この判定では"信頼度"の評価をします、"感情スカラー"が一時的に大きく振れるだけでは不十分であり「この変化が本当に意味のあるものかどうか」を見極めて混合の是非を判断します。 そのため、学習の**序盤では長期の安静シグナルの蓄積が少なく信頼に値しないため混合が発動しづらく**、**終盤では短期の緊張シグナルが収束しスカラー自体が閾値に届かず動作しません**。 (学習の序盤では判定基準の過去シグナルが少ないため動作しませんし、終盤では瞬間シグナルが少ないため動作しません) このように、EmoNAVIの"感情機構"は、単なる閾値反応ではなく「揺らぎに対する信頼ある変化のみを察知して反応する」慎重な意思決定を行います。 In addition, this decision-making process includes an evaluation of "reliability." It is not sufficient for the Emotion Scalar to simply spike temporarily; the mechanism assesses whether the fluctuation truly represents a meaningful change before deciding whether blending should occur. As a result, in the **early stages of learning**, blending is unlikely to be triggered because the long-term “calm” signal has not yet accumulated enough history to be trustworthy. 
In the **later stages**, on the other hand, the short-term "tension" signal tends to converge, and the scalar itself fails to exceed the threshold, so the mechanism remains inactive.
(In short: the mechanism tends not to activate in the early stages due to insufficient past signal for evaluation, and in the later stages due to lack of strong instantaneous signal.)
In this way, EmoNAVI's Emotion Mechanism does not respond merely to raw thresholds, but instead performs cautious decision-making, reacting only to fluctuations that it has learned to trust.

この一連の動作により学習時の過敏な反応を弛緩し不要なノイズ等を覚えないように制御します。
つまりoptimizer本来の学習率やベクトルを直接的に制御せず、感情機構の変化に応じ安定したパラメータになるよう後から調整する、
こういう流れになります。すべてを書き戻さずあくまで配合率に応じてブレンドするので学習の更新は止まらず進行は維持されます。

This series of actions helps relax hypersensitive reactions during learning and prevents the optimizer from overfitting to unnecessary noise. Rather than directly manipulating the optimizer's learning rate or update vectors, the system instead applies corrective blending afterward, adapting parameters in response to changes detected by the Emotion Mechanism. Because it blends adjustments based on a calculated ratio rather than fully overwriting parameter values, the learning process continues smoothly without interruption.

### 感情機構の動作とスカラー変遷(学習フェーズ別の結果的挙動)

| フェーズ | 状況(Loss変化) | EMAの挙動 | スカラーの変動傾向 | Shadow混合の実動作 | 感情機構としての意味ある挙動 |
|----------|-----------------------|------------------------------------|--------------------------|--------------------------|--------------------------------------------|
| 序盤 | 不安定・高め | Shortは鋭敏、Longは未成熟 | 大きく変動することもある | ほとんど発動しない | 判定に十分な履歴がなく、実質的に動作不可 |
| 中盤 | 徐々に収束傾向 | 両EMAが意味ある差分を持つようになる | 適度な振幅で安定推移 | 条件付きで発動する | 状態に応じてブレンド補正が有効に機能 |
| 終盤 | 収束・微振動 | Short ≒ Long(差分がほぼ消失) | 小さく収束 | 発動しなくなる | 静けさの合図:should_stop 条件が整う |

備考:
- スカラー値は常に tanh(5 * (short - long)) で生成されます
- 閾値:abs(scalar) > 0.3 で配合が始まり、> 0.6 で大きな混合比率(0.7以上)に
- Shadow混合はパラメータそのものを書き戻すのではなく、部分的に配合して"追従"させる設計です
- 感情スカラーの減衰=学習の「静穏化」→ 終盤に向けて should_stop の発火条件が整います

### Emotion Mechanism Behavior and Scalar Transitions (Outcome-Based Behavior by Learning Phase)

| Phase | Loss Characteristics | EMA Behavior | Scalar Fluctuation Pattern | Actual Shadow Blending | Meaningful Behavior of Emotion Mechanism |
|-------|----------------------|--------------|----------------------------|------------------------|------------------------------------------|
| Early | Unstable, high | Short is reactive; Long is still immature | May fluctuate sharply | Rarely triggered | Lacks sufficient history for decision-making; effectively inactive |
| Middle | Gradual convergence | EMA pair begins forming meaningful gaps | Moderate oscillation, relatively stable | Conditionally triggered | Adaptive blending functions effectively based on state |
| Late | Converged, micro-vibration | Short ≈ Long (gap nearly vanishes) | Narrow convergence | No longer triggered | Sign of stability; ready to trigger `should_stop` |

Notes:
- The scalar value is always computed as tanh(5 × (short - long))
- Thresholds:
  - If |scalar| > 0.3, blending is initiated
  - If |scalar| > 0.6, the blending ratio becomes large (≥ 0.7)
- Shadow blending does not overwrite parameters but applies partial integration for gradual alignment
- Scalar decay corresponds to learning "quieting," preparing the should_stop condition in the final phase

## 成果:
前述の感情機構の調整により、過剰な反応を抑制しノイズ耐性を上げることで、ベクトルの乱れ等も抑え進行方向を正しい向きに調整します、
正しいベクトルで進むことで学習は安定し収束へと最短で向かいます、感情機構による働きは学習後半のノイズ等を修正する仕上げを早くスムーズに完了できます。 また学習率や勾配やさまざまなパラメーターを保持せずに"今"を観察するだけで更新され続けることで、 途中終了、収束後の再学習、積層学習、等のときも現在値のみで学習継続を可能とします、 これは既存のoptimizerのような過去値を保存する手間を省きつつも新しく得られた利点でもあります。 ## Results The adjustments introduced by the Emotion Mechanism suppress excessive reactions and enhance noise tolerance, thereby reducing vector fluctuations and helping align the learning direction more accurately. By following the correct vector, learning proceeds more stably and reaches convergence in minimal time. The role of the Emotion Mechanism becomes especially apparent in the latter stages of training, where it effectively and smoothly corrects residual noise and instability. Moreover, since the optimizer continuously updates its parameters by observing only the current state—without retaining learning rates, gradients, or other historical parameters—it supports learning continuation in scenarios such as mid-training interruptions, retraining after convergence, and stacked learning. This capability not only eliminates the need to store past values like traditional optimizers but also introduces a new level of flexibility and simplicity. ## 結論: 生物のもつニューロンが一定の閾値を超えて初めて信号を発火させるように、EmoNAVIでも"感情振幅"を検出し行動(shadow混合)を起こします。 前述のとおり"感情機構"は一定閾値の超過時のみ動作します、ここはまさにニューロンスパイク的な動きといえるのではないでしょうか。 EmoNAVIの持つ"感情機構"は、そうした生物的反応に似ており、技術的な制御と生理的直感の融合点だろうと思います。 ## Conclusion Just as biological neurons fire only when a certain threshold is exceeded, EmoNAVI detects "emotional amplitude" and triggers an action—specifically, shadow blending. As described earlier, the Emotion Mechanism activates only when this amplitude crosses a predefined threshold. This behavior closely resembles neuronal spiking and can be seen as a biologically inspired response. We believe that EmoNAVI’s Emotion Mechanism represents a unique fusion of technical control and physiological intuition—bringing together algorithmic design and life-like reactivity. ## 展開: この"感情機構"の仕組みはVAE等を含むoptimizer以外にも簡単に応用可能だろうと思います、 それらの発展に少しでも寄与することができれば、AIとの未来を想像して、これほど嬉しいことはありません。 ぜひこの"感情機構"を応用しAIの発展への道筋を共に歩んでください。 ## Expansion The Emotion Mechanism described here is highly adaptable and can be easily applied beyond optimizers—including use cases such as variational autoencoders (VAEs) and other architectures. If this approach can contribute, even in a small way, to the advancement of such systems, we would be honored to be part of imagining a future together with AI. We warmly invite you to explore the application of this Emotion Mechanism and walk alongside us on the path toward advancing intelligent systems. 
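Before the structural diagram in the next section, here is a minimal sketch in code of the flow described above. This is an illustrative reconstruction, not the actual EmoNAVI implementation: the EMA weights (0.3 / 0.01), the tanh(5 × diff) scalar, and the 0.3 / 0.6 thresholds follow the tables and diagram in this card, while names such as `EmotionMechanism`, `shadow`, and `shadow_follow` are assumptions made for the example.

```python
import math
import torch

# Minimal sketch of the Emotion Mechanism wrapped around an ordinary optimizer step.
# EMA weights and thresholds follow the card above; everything else is illustrative.
class EmotionMechanism:
    def __init__(self, params, short_w=0.3, long_w=0.01, shadow_follow=0.01):
        self.short = None                  # short-term loss EMA ("tension")
        self.long = None                   # long-term loss EMA ("calm")
        self.short_w, self.long_w = short_w, long_w
        self.shadow_follow = shadow_follow
        # Shadow: a slowly following copy of the parameters, saved at the start of training
        self.shadow = [p.detach().clone() for p in params]

    def step(self, loss_value, params):
        # 1) Update the two loss EMAs
        if self.short is None:
            self.short = self.long = loss_value
        self.short += self.short_w * (loss_value - self.short)
        self.long += self.long_w * (loss_value - self.long)

        # 2) Emotion scalar from the EMA gap
        scalar = math.tanh(5.0 * (self.short - self.long))

        # 3) Threshold logic: blend the shadow back in only on a large "emotional amplitude"
        if abs(scalar) > 0.6:
            blend_ratio = 0.7
        elif abs(scalar) > 0.3:
            blend_ratio = abs(scalar)      # illustrative: moderate, scalar-dependent blending
        else:
            blend_ratio = 0.0

        with torch.no_grad():
            for p, s in zip(params, self.shadow):
                if blend_ratio > 0.0:
                    # Partial blend toward the calmer past state (never a full overwrite)
                    p.mul_(1.0 - blend_ratio).add_(s, alpha=blend_ratio)
                # The shadow itself slowly follows the current parameters
                s.mul_(1.0 - self.shadow_follow).add_(p, alpha=self.shadow_follow)
        return scalar
```

In a training loop this sketch would run right after the inner `optimizer.step()`, e.g. `emo.step(loss.item(), list(model.parameters()))`, so the optimizer's own update is kept and is only softened toward the shadow when the scalar spikes.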
## 技術: EMAベースのスカラー判断とshadow混合の構造
## Technology: Structure of EMA-Based Scalar Evaluation and Shadow Blending

```
  +------------+            +------------+
  |  Loss(t)   |            |  Loss(t)   |
  +-----+------+            +-----+------+
        |                         |
┌───────▼───────────┐     ┌───────▼───────────┐
│     Short EMA     │     │     Long EMA      │
│  (weight = 0.3)   │     │  (weight = 0.01)  │
└─────────┬─────────┘     └─────────┬─────────┘
          │                         │
          └────────────┬────────────┘
                       ▼
           +---------------------+
           | 差分 (short - long) |
           +---------------------+
                       │
                       ▼
           +------------------+
           |  tanh(5 × diff)  |  ← 感情スカラー生成
           +--------+---------+
                    │
       [ if |scalar| > threshold ] 判定
                    │
           +--------▼--------+
           |  Shadow比率決定 |
           +--------+--------+
                    │
           +--------▼--------+
           |  Shadow混合補正 |  ← 過去情報を追従的にブレンド
           +-----------------+
```

## 付録: EmoNAVIのグラフへのリンク

Measured with LR of 1e-4 / それぞれ 1e-4 のLRにて測定

![graph00](https://github.com/muooon/EmoNavi/blob/main/emonavi-test00.png)
![graph01](https://github.com/muooon/EmoNavi/blob/main/emonavi-test01.png)
![graph02](https://github.com/muooon/EmoNavi/blob/main/emonavi-test02.png)

Have fun learning about EmoNAVI's philosophy and how it works
https://huggingface.co/muooon/EmoNAVI/blob/main/emonavi-inner-workings(ENG).txt
EmoNAVIの考え方、その仕組みについて楽しく知る
https://huggingface.co/muooon/EmoNAVI/blob/main/emonavi-inner-workings(JPN).txt

## 経緯:
現状の強化学習などを見ていていくつかの疑問に出会いました、
日本の著名な漫画家、手塚治虫氏の描いた未来社会、それに憧れ羨望した少年時代を思い返すと、
人類のパートナーになるべきAIについて他のアプローチを模索したくなりました、
今回の提案はそのアプローチによるひとつの結果です

## Background
While observing the current state of reinforcement learning and related fields, I encountered several fundamental questions.
Reflecting on my childhood, when I admired and longed for the future societies envisioned by the legendary Japanese manga artist Osamu Tezuka,
I felt compelled to explore alternative approaches to how AI might serve as a true partner to humanity.
This proposal represents one such result born from that aspiration.

## 謝意: Acknowledgements
Emoシリーズは、Adam、Adafactor、Lion、Tiger、等から多くを学びました。
これらの後継ではなく独自の思想や設計による"感情機構"というアプローチにより構築されています。
汎用性・自律性・適応性を重視し新たな最適化や効率化や簡易化を追求しています。
この開発において先人たちの知見に深く感謝しつつ今後も新しい可能性を探究します。

The Emo series has learned much from Adam, Adafactor, Lion, and Tiger.
Rather than being their successor, it is built upon a unique philosophy and design approach centered on the "Emotion Mechanism".
It prioritizes generality, autonomy, and adaptability in pursuit of new paths for optimization, efficiency, and simplicity.
In its development, we deeply appreciate the insights of those who came before us, and we continue to explore new possibilities beyond them.

これまでAIの発展に寄与されたすべての方、これから貢献するすべての方へ感謝します、
このプロジェクト完成を支え続けてくれた Copilotさんに、ありがとう。

We extend our heartfelt gratitude to all those who have contributed, and will continue to contribute, to the advancement of AI.
Special thanks to Copilot for its unwavering support throughout this project.
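Since the release notes near the top of this card mention registration in Pytorch-optimizer 3.7.0, the following is a hedged usage sketch. The exported class name `EmoNavi` and its constructor arguments are assumptions; consult the pytorch-optimizer documentation for the exact interface. The learning rate of 1e-4 matches the graphs linked above.

```python
import torch
import torch.nn as nn
# Assumption: EmoNavi is exported under this name in pytorch-optimizer >= 3.7.0;
# check the library's documentation if this import differs.
from pytorch_optimizer import EmoNavi

model = nn.Linear(128, 10)
optimizer = EmoNavi(model.parameters(), lr=1e-4)  # 1e-4 as used for the graphs above
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
for _ in range(10):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```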
corzamennav/blockassist-bc-territorial_wild_antelope_1758755510
corzamennav
2025-09-24T23:12:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-24T23:12:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hayangSKEL/blockassist
hayangSKEL
2025-09-24T23:11:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T11:24:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yufeng1/OpenThinker-7B-reasoning-lora-merged-OT-hard-type1-e3-2
yufeng1
2025-09-24T23:09:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T23:09:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mahofmahof/Qwen3-0.6B-Gensyn-Swarm-eager_freckled_seal
mahofmahof
2025-09-24T23:09:07Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am eager_freckled_seal", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T08:37:39Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am eager_freckled_seal --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tomlobato/markuplm-prop-ner
tomlobato
2025-09-24T23:08:03Z
22
0
transformers
[ "transformers", "safetensors", "markuplm", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-27T03:47:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HectorHe/DeepSeek-V2-Lite-aux-free-sft-commonsense-1epoch-1e-5-gamma
HectorHe
2025-09-24T23:03:06Z
0
0
transformers
[ "transformers", "safetensors", "deepseek_v2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "custom_code", "dataset:fw407/Commonsense-15K", "base_model:deepseek-ai/DeepSeek-V2-Lite", "base_model:finetune:deepseek-ai/DeepSeek-V2-Lite", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T20:50:20Z
--- base_model: deepseek-ai/DeepSeek-V2-Lite datasets: fw407/Commonsense-15K library_name: transformers model_name: DeepSeek-V2-Lite-aux-free-sft-commonsense-1epoch-1e-5-gamma tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for DeepSeek-V2-Lite-aux-free-sft-commonsense-1epoch-1e-5-gamma This model is a fine-tuned version of [deepseek-ai/DeepSeek-V2-Lite](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) on the [fw407/Commonsense-15K](https://huggingface.co/datasets/fw407/Commonsense-15K) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="HectorHe/DeepSeek-V2-Lite-aux-free-sft-commonsense-1epoch-1e-5-gamma", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/hxk8kid8) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.51.0 - Pytorch: 2.6.0 - Datasets: 4.1.1 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
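For readers who want to see roughly how such a run is launched with TRL's `SFTTrainer`, here is a hedged sketch. The hyperparameters below are assumptions inferred from the model name (1 epoch, learning rate 1e-5) and the listed dataset; the exact configuration used for this checkpoint is not documented here, and the dataset split and batch size are illustrative.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

# Base model and dataset from the card above; split name is an assumption.
model_id = "deepseek-ai/DeepSeek-V2-Lite"
dataset = load_dataset("fw407/Commonsense-15K", split="train")

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

training_args = SFTConfig(
    output_dir="DeepSeek-V2-Lite-sft-commonsense",  # assumed output name
    num_train_epochs=1,              # "1epoch" in the model name
    learning_rate=1e-5,              # "1e-5" in the model name
    per_device_train_batch_size=1,   # illustrative; the real setting is unknown
    bf16=True,                       # illustrative
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```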
pranavnaik98/tiago-open-door_act_policy_gazebo
pranavnaik98
2025-09-24T22:59:19Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:pranavnaik98/tiago-gazebo-act-30", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-09-24T22:58:37Z
--- datasets: pranavnaik98/tiago-gazebo-act-30 library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - act - lerobot - robotics --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
ImSakushi/maille2
ImSakushi
2025-09-24T22:55:52Z
1
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-05T01:30:21Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: NameBazillais --- # Maille2 <!-- <Gallery /> --> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `NameBazillais` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('ImSakushi/maille2', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
corzamennav/blockassist-bc-territorial_wild_antelope_1758754415
corzamennav
2025-09-24T22:54:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-24T22:54:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ypszn/blockassist
ypszn
2025-09-24T22:54:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping pawing worm", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T21:48:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping pawing worm --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Mohawad1/whisper-small-unsloth-egy-finetuned-full-v3
Mohawad1
2025-09-24T22:53:31Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-24T13:27:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
noobmaster6009/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-invisible_raging_lemur
noobmaster6009
2025-09-24T22:51:10Z
67
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am invisible_raging_lemur", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-20T11:09:20Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am invisible_raging_lemur --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
WinstonHu/pure_model_weights
WinstonHu
2025-09-24T22:50:36Z
0
0
pytorch
[ "pytorch", "digital-pathology", "wsi", "llava", "license:apache-2.0", "region:us" ]
null
2025-09-20T21:18:10Z
--- library_name: "pytorch" tags: - digital-pathology - wsi - llava license: apache-2.0 --- # Efficient Foundation WSI-LLaVA (Stage 2) This repo hosts a Stage-2 checkpoint folder. **Folder** - `stage_1/token_merge_plus_resampler/stage1_qwen25_both_hf` (uploaded from `/mnt/bn/yuxuanwang/meilong/code/projects/efficient_foundation_wsi_llava/checkpoints/stage_1/token_merge_plus_resampler/stage1_qwen25_both_hf`) > Owner: @WinstonHu
Guilherme34/AGI-405B-forgetaboutthismodel
Guilherme34
2025-09-24T22:48:31Z
905
3
null
[ "safetensors", "llama", "region:us" ]
null
2025-09-03T18:03:06Z
THE MODEL IS NOT FINISHED AND IS NOT GOOD; A LOT OF PROBLEMS HAPPENED IN THE MERGING PROCESS.
timm/vit_small_plus_patch16_dinov3_qkvb.lvd1689m
timm
2025-09-24T22:45:06Z
33
0
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "transformers", "dataset:lvd-1689m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T16:40:24Z
--- tags: - image-feature-extraction - timm - transformers pipeline_tag: image-feature-extraction library_name: timm license: other license_name: dinov3-license license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license datasets: - lvd-1689m --- # Model card for vit_small_plus_patch16_dinov3_qkvb.lvd1689m A DINOv3 ViT model image feature encoder. Distilled on LVD-1689M from the DINOv3 ViT-7B model. ## Model Notes * The original model weights ended up with all QKV projection biases being zeroes. For `timm`, have disabled the QKV bias (`qkv_bias=False`) for the models and not loaded the zero weights. For some model sizes there are variants with `qkvb` in the name that have the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of `transformers` and original models. * The original models keep RoPE periods as a persistent `bfloat16` buffer. `timm` generates `float32` periods at init. This results in some numerical differences, however the `timm` approach should be less problematic running on devices without bfloat16 support, and appears to work as well if not slightly better for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs. ## Model Details - **Model Type:** Image Feature Encoder - **Model Stats:** - Params (M): 28.7 - GMACs: 8.1 - Activations (M): 21.8 - Image size: 256 x 256 - **Original:** https://github.com/facebookresearch/dinov3 - **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license) - **Dataset:** LVD-1689M - **Papers:** - DINOv3: https://arxiv.org/abs/2508.10104 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_small_plus_patch16_dinov3_qkvb.lvd1689m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_small_plus_patch16_dinov3_qkvb.lvd1689m', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 384, 16, 16]) # torch.Size([1, 384, 16, 16]) # torch.Size([1, 384, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_small_plus_patch16_dinov3_qkvb.lvd1689m', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 261, 384) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison See the associated paper for details on the evaluation protocols ### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M) | Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair | |-------|---------|------|---------|-------|--------|------|-------|------|-------| | **Global Tasks** | | | | | **Dense Tasks** | | | | | | DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 | | DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 | | DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 | | DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 | | DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 | | DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 | ### Results for ConvNeXt backbones distilled on web (LVD-1689M) | Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ | |-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------| | **Global Tasks** | | | | | | | **Dense Tasks** | | | DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 | | DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 | | DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 | | DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 | ### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M) #### (GEO-Bench) Classification | Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean | |-------|---------|--------------|-----------|-------------|----------|----------|------| | DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 | | DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 | #### (GEO-Bench) Segmentation | Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean | |-------|----------|--------------|------------|-------------|--------------|-----------|------| | DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 | | DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 | ## Citation ```bibtex @article{simeoni2025dinov3, title={DINOv3}, author={Sim{'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{"e}l and others}, journal={arXiv preprint arXiv:2508.10104}, year={2025} } } ``` 
```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
timm/vit_large_patch16_dinov3.sat493m
timm
2025-09-24T22:44:16Z
100
0
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "transformers", "dataset:sat-493m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T16:37:41Z
--- tags: - image-feature-extraction - timm - transformers pipeline_tag: image-feature-extraction library_name: timm license: other license_name: dinov3-license license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license datasets: - sat-493m --- # Model card for vit_large_patch16_dinov3.sat493m A DINOv3 ViT model image feature encoder. Distilled on SAT-493M from the DINOv3 ViT-7B model. ## Model Notes * The original model weights ended up with all QKV projection biases being zeroes. For `timm`, have disabled the QKV bias (`qkv_bias=False`) for the models and not loaded the zero weights. For some model sizes there are variants with `qkvb` in the name that have the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of `transformers` and original models. * The original models keep RoPE periods as a persistent `bfloat16` buffer. `timm` generates `float32` periods at init. This results in some numerical differences, however the `timm` approach should be less problematic running on devices without bfloat16 support, and appears to work as well if not slightly better for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs. ## Model Details - **Model Type:** Image Feature Encoder - **Model Stats:** - Params (M): 303.1 - GMACs: 82.4 - Activations (M): 90.6 - Image size: 256 x 256 - **Original:** https://github.com/facebookresearch/dinov3 - **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license) - **Dataset:** SAT-493M - **Papers:** - DINOv3: https://arxiv.org/abs/2508.10104 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_large_patch16_dinov3.sat493m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch16_dinov3.sat493m', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 1024, 16, 16]) # torch.Size([1, 1024, 16, 16]) # torch.Size([1, 1024, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch16_dinov3.sat493m', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 261, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison See the associated paper for details on the evaluation protocols ### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M) | Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair | |-------|---------|------|---------|-------|--------|------|-------|------|-------| | **Global Tasks** | | | | | **Dense Tasks** | | | | | | DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 | | DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 | | DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 | | DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 | | DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 | | DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 | ### Results for ConvNeXt backbones distilled on web (LVD-1689M) | Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ | |-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------| | **Global Tasks** | | | | | | | **Dense Tasks** | | | DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 | | DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 | | DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 | | DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 | ### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M) #### (GEO-Bench) Classification | Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean | |-------|---------|--------------|-----------|-------------|----------|----------|------| | DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 | | DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 | #### (GEO-Bench) Segmentation | Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean | |-------|----------|--------------|------------|-------------|--------------|-----------|------| | DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 | | DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 | ## Citation ```bibtex @article{simeoni2025dinov3, title={DINOv3}, author={Sim{'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{"e}l and others}, journal={arXiv preprint arXiv:2508.10104}, year={2025} } } ``` ```bibtex 
@article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
zeekay/supra-nexus-o1-thinking
zeekay
2025-09-24T22:43:16Z
0
0
transformers
[ "transformers", "text-generation", "reasoning", "supra-nexus", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T22:43:15Z
--- license: apache-2.0 language: - en tags: - text-generation - reasoning - supra-nexus library_name: transformers --- # Supra Nexus o1 Thinking Advanced reasoning model with transparent thought processes developed by Supra Foundation LLC. ## Model Details - **Base**: Qwen3-4B-Thinking-2507 - **Parameters**: 4B - **Training**: LoRA fine-tuning with MLX - **Organization**: Supra Foundation LLC ## Features - Chain-of-thought reasoning with <thinking> tags - Transparent decision-making process - Advanced problem-solving capabilities - Production-ready performance ## Usage Version: ImageMagick 7.1.2-3 Q16-HDRI aarch64 23340 https://imagemagick.org Copyright: (C) 1999 ImageMagick Studio LLC License: https://imagemagick.org/script/license.php Features: Cipher DPC HDRI Modules OpenMP Delegates (built-in): bzlib fontconfig freetype heic jng jp2 jpeg jxl lcms lqr ltdl lzma openexr png raw tiff webp xml zlib zstd Compiler: clang (17.0.0) Usage: import [options ...] [ file ] Image Settings: -adjoin join images into a single multi-image file -border include window border in the output image -channel type apply option to select image channels -colorspace type alternate image colorspace -comment string annotate image with comment -compress type type of pixel compression when writing the image -define format:option define one or more image format options -density geometry horizontal and vertical density of the image -depth value image depth -descend obtain image by descending window hierarchy -display server X server to contact -dispose method layer disposal method -dither method apply error diffusion to image -delay value display the next image after pausing -encipher filename convert plain pixels to cipher pixels -endian type endianness (MSB or LSB) of the image -encoding type text encoding type -filter type use this filter when resizing an image -format "string" output formatted image characteristics -frame include window manager frame -gravity direction which direction to gravitate towards -identify identify the format and characteristics of the image -interlace type None, Line, Plane, or Partition -interpolate method pixel color interpolation method -label string assign a label to an image -limit type value Area, Disk, Map, or Memory resource limit -monitor monitor progress -page geometry size and location of an image canvas -pause seconds seconds delay between snapshots -pointsize value font point size -quality value JPEG/MIFF/PNG compression level -quiet suppress all warning messages -regard-warnings pay attention to warning messages -repage geometry size and location of an image canvas -respect-parentheses settings remain in effect until parenthesis boundary -sampling-factor geometry horizontal and vertical sampling factor -scene value image scene number -screen select image from root window -seed value seed a new sequence of pseudo-random numbers -set property value set an image property -silent operate silently, i.e. 
don't ring any bells -snaps value number of screen snapshots -support factor resize support: > 1.0 is blurry, < 1.0 is sharp -synchronize synchronize image to storage device -taint declare the image as modified -transparent-color color transparent color -treedepth value color tree depth -verbose print detailed information about the image -virtual-pixel method Constant, Edge, Mirror, or Tile -window id select window with this id or name root selects whole screen Image Operators: -annotate geometry text annotate the image with text -colors value preferred number of colors in the image -crop geometry preferred size and location of the cropped image -encipher filename convert plain pixels to cipher pixels -extent geometry set the image size -geometry geometry preferred size or location of the image -help print program options -monochrome transform image to black and white -negate replace every pixel with its complementary color -quantize colorspace reduce colors in this colorspace -resize geometry resize the image -rotate degrees apply Paeth rotation to the image -strip strip image of all profiles and comments -thumbnail geometry create a thumbnail of the image -transparent color make this color transparent within the image -trim trim image edges -type type image type Miscellaneous Options: -debug events display copious debugging information -help print program options -list type print a list of supported option arguments -log format format of debugging information -version print version information By default, 'file' is written in the MIFF image format. To specify a particular image format, precede the filename with an image format name and a colon (i.e. ps:image) or specify the image type as the filename suffix (i.e. image.ps). Specify 'file' as '-' for standard input or output.
JesseHuang922/lv_roberta_base
JesseHuang922
2025-09-24T22:42:53Z
0
0
null
[ "lv", "dataset:universal_dependencies", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:cc-by-sa-4.0", "region:us" ]
null
2025-09-24T21:59:20Z
--- language: - lv base_model: - FacebookAI/xlm-roberta-base license: cc-by-sa-4.0 datasets: - universal_dependencies metrics: - accuracy - uas - las --- # Latvian SpaCy Model: lv_roberta_base ## Github Repo: https://github.com/LazyBomb-SIA/LV_RoBERTa_Base --- ## Overview This is a **spaCy transformer-based pipeline for Latvian**, built with the **XLM-RoBERTa-base backbone**. **Performance Comparison** | Model | POS | Tag | Morph | UAS | LAS | Lemma Acc | Summary (equal weights) | | ------------ | ------ | ------ | ------ | ------ | ------ | --------- | ------ | | spaCy (this model) | 0.9748 | 0.9215 | 0.9550 | 0.9104 | 0.8753 | 0.8203 | 90.96% | | Stanza | 0.9688 | 0.8987 | 0.9449 | 0.8791 | 0.8354 | 0.9539 | 91.35% | | UDPipe | 0.9207 | 0.7960 | 0.3403 | 0.0791 | 0.0660 | 0.8911 | 51.55% | Details please check cell 12 and 13 here: https://github.com/LazyBomb-SIA/LV_RoBERTa_Base/blob/main/lv_roberta_base.ipynb It includes the following components: - **Transformer** (XLM-RoBERTa-base) - **Tagger** - **Morphologizer** - **Parser** - **Sentence Segmenter (senter)** - **Lemmatizer** - (Note: Transformer component internally uses a `tok2vec` listener) **Model type:** Transformer pipeline (XLM-RoBERTa-base backbone) **Language:** Latvian (lv) **Recommended hardware:** CPU for small-scale use, GPU recommended for faster inference. --- ## Training Data The model was trained on the **Latvian UD Treebank v2.16**, which is derived from the **Latvian Treebank (LVTB)** created at the University of Latvia, Institute of Mathematics and Computer Science, Artificial Intelligence Laboratory (AI Lab). - **Dataset source:** [UD Latvian LVTB](https://github.com/UniversalDependencies/UD_Latvian-LVTB) - **License:** [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) - **Data splits:** - Train: 15,055 sentences - Dev: 2,080 sentences - Test: 2,396 sentences --- ## Acknowledgements - Thanks to the **University of Latvia, AI Lab**, and all contributors of the **Latvian UD Treebank**. - Model development supported by [LazyBomb.SIA]. - Inspired by the **spaCy ecosystem** and training framework. - The Latvian UD Treebank was developed with support from multiple grants, including: - European Regional Development Fund (Grant No. 1.1.1.1/16/A/219, 1.1.1.2/VIAA/1/16/188) - State Research Programme "National Identity" - State Research Programme "Digital Resources for the Humanities" (Grant No. VPP-IZM-DH-2020/1-0001) - State Research Programme "Research on Modern Latvian Language and Development of Language Technology" (Grant No. VPP-LETONIKA-2021/1-0006) --- ## Special Thanks Special Thanks to all contributors who participated in the Beta test and espically those who provided valuable feedback **The list is waiting** --- ## License This model is released under the [Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/). You are free to: - **Share** — copy and redistribute the material in any medium or format, for any purpose, even commercially. - **Adapt** — remix, transform, and build upon the material for any purpose, even commercially. Under the following terms: - **Attribution** — You must give appropriate credit, provide a link to the license, and indicate if changes were made. - **ShareAlike** — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. --- ## References - Pretkalniņa, L., Rituma, L., Saulīte, B., et al. (2016–2025). 
Universal Dependencies Latvian Treebank (LVTB). - Grūzītis, N., Znotiņš, A., Nešpore-Bērzkalne, G., Paikens, P., et al. (2018). Creation of a Balanced State-of-the-Art Multilayer Corpus for NLU. *LREC 2018*. - Pretkalniņa, L., Rituma, L., Saulīte, B. (2016). Universal Dependency Treebank for Latvian: A Pilot. *Baltic Perspective Workshop*. --- --- ## Usage You can either: 1. **Download the model directly from the Hugging Face Hub** Using `huggingface_hub.snapshot_download`, the model files will be automatically fetched and cached locally. ```python import spacy from huggingface_hub import snapshot_download # Load the pipeline model_dir = snapshot_download(repo_id="JesseHuang922/lv_roberta_base", repo_type="model") nlp = spacy.load(model_dir) ``` 2. **Install from the pre-built wheel package** Download the wheel file (**lv_roberta_base-1.0.0-py3-none-any.whl**) and install it into your virtual environment with: ```bash pip install lv_roberta_base-1.0.0-py3-none-any.whl --- ## Dependencies The following Python packages are required to run the Latvian XLM-RoBERTa spaCy pipeline: | Package | Minimum Version | Notes | | ---------------------- | --------------- | -------------------------------------------------------------------------------------- | | **spaCy** | 3.8.7 | Main NLP framework | | **spacy-transformers** | 1.3.9 | Integrates spaCy with Hugging Face Transformers | | **transformers** | 4.49.0 | Hugging Face Transformers library | | **torch** | 2.8.0 | PyTorch backend for transformers | | **tokenizers** | 0.21.4 | Fast tokenizer support | | **safetensors** | 0.6.2 | Secure tensor storage for transformer weights | | **huggingface-hub** | 0.34.4 | Download and manage the model files from the Hugging Face Hub | ## Optional but recommended | Package | Minimum Version | Notes | | ---------------------- | --------------- | -------------------------------------------------------------------------------------- | | **hf-xet** | 1.1.10 | if you need to download or upload large files from the Hugging Face Hub and use the Xet storage backend | ## Download all dependencies with just one command line: ```bash pip install \ spacy>=3.8.7 \ spacy-transformers>=1.3.9 \ transformers>=4.49.0 \ torch>=2.8.0 \ tokenizers>=0.21.4 \ safetensors>=0.6.2 \ huggingface-hub>=0.34.4 \ hf-xet>=1.1.10 ``` ## Example Code ```python import spacy import numpy as np from huggingface_hub import snapshot_download # Load the pipeline model_dir = snapshot_download(repo_id="JesseHuang922/lv_roberta_base", repo_type="model") nlp = spacy.load(model_dir) # Example text text = """Baltijas jūras nosaukums ir devis nosaukumu baltu valodām un Baltijas valstīm. Terminu "Baltijas jūra" (Mare Balticum) pirmoreiz lietoja vācu hronists Brēmenes Ādams 11. 
gadsimtā.""" # Process text doc = nlp(text) # ------------------------ # Tokenization # ------------------------ print("Tokens:") print([token.text for token in doc]) # ------------------------ # Lemmatization # ------------------------ print("Lemmas:") print([token.lemma_ for token in doc]) # ------------------------ # Part-of-Speech Tagging # ------------------------ print("POS tags:") for token in doc: print(f"{token.text}: {token.pos_} ({token.tag_})") # ------------------------ # Morphological Features # ------------------------ print("Morphological features:") for token in doc: print(f"{token.text}: {token.morph}") # ------------------------ # Dependency Parsing # ------------------------ print("Dependency parsing:") for token in doc: print(f"{token.text} <--{token.dep_}-- {token.head.text}") # ------------------------ # Sentence Segmentation # ------------------------ print("Sentences:") for sent in doc.sents: print(sent.text) # ------------------------ # Check Pipeline Components # ------------------------ print("Pipeline components:") print(nlp.pipe_names) # Transformer vectors vectors = np.vstack([token.vector for token in doc]) print("Token vectors shape:", vectors.shape)
timm/vit_base_patch16_dinov3.lvd1689m
timm
2025-09-24T22:41:56Z
799
0
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "transformers", "dataset:lvd-1689m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T16:31:36Z
--- tags: - image-feature-extraction - timm - transformers pipeline_tag: image-feature-extraction library_name: timm license: other license_name: dinov3-license license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license datasets: - lvd-1689m --- # Model card for vit_base_patch16_dinov3.lvd1689m A DINOv3 ViT model image feature encoder. Distilled on LVD-1689M from the DINOv3 ViT-7B model. ## Model Notes * The original model weights ended up with all QKV projection biases being zeroes. For `timm`, have disabled the QKV bias (`qkv_bias=False`) for the models and not loaded the zero weights. For some model sizes there are variants with `qkvb` in the name that have the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of `transformers` and original models. * The original models keep RoPE periods as a persistent `bfloat16` buffer. `timm` generates `float32` periods at init. This results in some numerical differences, however the `timm` approach should be less problematic running on devices without bfloat16 support, and appears to work as well if not slightly better for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs. ## Model Details - **Model Type:** Image Feature Encoder - **Model Stats:** - Params (M): 85.6 - GMACs: 23.6 - Activations (M): 34.1 - Image size: 256 x 256 - **Original:** https://github.com/facebookresearch/dinov3 - **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license) - **Dataset:** LVD-1689M - **Papers:** - DINOv3: https://arxiv.org/abs/2508.10104 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_base_patch16_dinov3.lvd1689m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch16_dinov3.lvd1689m', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 768, 16, 16]) # torch.Size([1, 768, 16, 16]) # torch.Size([1, 768, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch16_dinov3.lvd1689m', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 261, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison See the associated paper for details on the evaluation protocols ### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M) | Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair | |-------|---------|------|---------|-------|--------|------|-------|------|-------| | **Global Tasks** | | | | | **Dense Tasks** | | | | | | DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 | | DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 | | DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 | | DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 | | DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 | | DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 | ### Results for ConvNeXt backbones distilled on web (LVD-1689M) | Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ | |-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------| | **Global Tasks** | | | | | | | **Dense Tasks** | | | DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 | | DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 | | DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 | | DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 | ### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M) #### (GEO-Bench) Classification | Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean | |-------|---------|--------------|-----------|-------------|----------|----------|------| | DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 | | DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 | #### (GEO-Bench) Segmentation | Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean | |-------|----------|--------------|------------|-------------|--------------|-----------|------| | DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 | | DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 | ## Citation ```bibtex @article{simeoni2025dinov3, title={DINOv3}, author={Sim{'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{"e}l and others}, journal={arXiv preprint arXiv:2508.10104}, year={2025} } } ``` ```bibtex 
@article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
timm/vit_7b_patch16_dinov3.sat493m
timm
2025-09-24T22:41:48Z
84
0
timm
[ "timm", "safetensors", "image-feature-extraction", "transformers", "dataset:sat-493m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T17:15:30Z
--- tags: - image-feature-extraction - timm - transformers pipeline_tag: image-feature-extraction library_name: timm license: other license_name: dinov3-license license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license datasets: - sat-493m --- # Model card for vit_7b_patch16_dinov3.sat493m A DINOv3 ViT model image feature encoder. Pretrained on SAT-493M with self-supervised DINOv3 method. ## Model Notes * The original model weights ended up with all QKV projection biases being zeroes. For `timm`, have disabled the QKV bias (`qkv_bias=False`) for the models and not loaded the zero weights. For some model sizes there are variants with `qkvb` in the name that have the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of `transformers` and original models. * The original models keep RoPE periods as a persistent `bfloat16` buffer. `timm` generates `float32` periods at init. This results in some numerical differences, however the `timm` approach should be less problematic running on devices without bfloat16 support, and appears to work as well if not slightly better for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs. ## Model Details - **Model Type:** Image Feature Encoder - **Model Stats:** - Params (M): 6716.0 - GMACs: 1775.1 - Activations (M): 515.9 - Image size: 256 x 256 - **Original:** https://github.com/facebookresearch/dinov3 - **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license) - **Dataset:** SAT-493M - **Papers:** - DINOv3: https://arxiv.org/abs/2508.10104 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_7b_patch16_dinov3.sat493m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_7b_patch16_dinov3.sat493m', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 4096, 16, 16]) # torch.Size([1, 4096, 16, 16]) # torch.Size([1, 4096, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_7b_patch16_dinov3.sat493m', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 261, 4096) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison See the associated paper for details on the evaluation protocols ### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M) | Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair | |-------|---------|------|---------|-------|--------|------|-------|------|-------| | **Global Tasks** | | | | | **Dense Tasks** | | | | | | DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 | | DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 | | DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 | | DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 | | DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 | | DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 | ### Results for ConvNeXt backbones distilled on web (LVD-1689M) | Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ | |-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------| | **Global Tasks** | | | | | | | **Dense Tasks** | | | DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 | | DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 | | DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 | | DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 | ### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M) #### (GEO-Bench) Classification | Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean | |-------|---------|--------------|-----------|-------------|----------|----------|------| | DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 | | DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 | #### (GEO-Bench) Segmentation | Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean | |-------|----------|--------------|------------|-------------|--------------|-----------|------| | DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 | | DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 | ## Citation ```bibtex @article{simeoni2025dinov3, title={DINOv3}, author={Sim{'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{"e}l and others}, journal={arXiv preprint arXiv:2508.10104}, year={2025} } } ``` ```bibtex 
@article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
Derify/ModChemBERT-MLM
Derify
2025-09-24T22:41:30Z
0
0
transformers
[ "transformers", "safetensors", "modchembert", "fill-mask", "modernbert", "ModChemBERT", "cheminformatics", "chemical-language-model", "molecular-property-prediction", "custom_code", "dataset:Derify/augmented_canonical_druglike_QED_Pfizer_15M", "arxiv:2505.15696", "arxiv:2503.03360", "arxiv:2412.13663", "arxiv:2407.21783", "license:apache-2.0", "model-index", "autotrain_compatible", "region:us" ]
fill-mask
2025-09-24T01:51:36Z
--- license: apache-2.0 datasets: - Derify/augmented_canonical_druglike_QED_Pfizer_15M metrics: - roc_auc - rmse library_name: transformers tags: - modernbert - ModChemBERT - cheminformatics - chemical-language-model - molecular-property-prediction pipeline_tag: fill-mask model-index: - name: Derify/ModChemBERT-MLM results: - task: type: text-classification name: Classification (ROC AUC) dataset: name: BACE type: BACE metrics: - type: roc_auc value: 0.8065 - task: type: text-classification name: Classification (ROC AUC) dataset: name: BBBP type: BBBP metrics: - type: roc_auc value: 0.7222 - task: type: text-classification name: Classification (ROC AUC) dataset: name: CLINTOX type: CLINTOX metrics: - type: roc_auc value: 0.9709 - task: type: text-classification name: Classification (ROC AUC) dataset: name: HIV type: HIV metrics: - type: roc_auc value: 0.7800 - task: type: text-classification name: Classification (ROC AUC) dataset: name: SIDER type: SIDER metrics: - type: roc_auc value: 0.6419 - task: type: text-classification name: Classification (ROC AUC) dataset: name: TOX21 type: TOX21 metrics: - type: roc_auc value: 0.7400 - task: type: regression name: Regression (RMSE) dataset: name: BACE type: BACE metrics: - type: rmse value: 1.0893 - task: type: regression name: Regression (RMSE) dataset: name: CLEARANCE type: CLEARANCE metrics: - type: rmse value: 49.0005 - task: type: regression name: Regression (RMSE) dataset: name: ESOL type: ESOL metrics: - type: rmse value: 0.8456 - task: type: regression name: Regression (RMSE) dataset: name: FREESOLV type: FREESOLV metrics: - type: rmse value: 0.5491 - task: type: regression name: Regression (RMSE) dataset: name: LIPO type: LIPO metrics: - type: rmse value: 0.7147 --- # ModChemBERT: ModernBERT as a Chemical Language Model ModChemBERT is a ModernBERT-based chemical language model (CLM), trained on SMILES strings for masked language modeling (MLM) and downstream molecular property prediction (classification & regression). ## Usage Install the `transformers` library starting from v4.56.1: ```bash pip install -U transformers>=4.56.1 ``` ### Load Model ```python from transformers import AutoModelForMaskedLM, AutoTokenizer model_id = "Derify/ModChemBERT-MLM" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForMaskedLM.from_pretrained( model_id, trust_remote_code=True, dtype="float16", device_map="auto", ) ``` ### Fill-Mask Pipeline ```python from transformers import pipeline fill = pipeline("fill-mask", model=model, tokenizer=tokenizer) print(fill("c1ccccc1[MASK]")) ``` ## Architecture - Backbone: ModernBERT - Hidden size: 768 - Intermediate size: 1152 - Encoder Layers: 22 - Attention heads: 12 - Max sequence length: 256 tokens (MLM primarily trained with 128-token sequences) - Vocabulary: BPE tokenizer using [MolFormer's vocab](https://github.com/emapco/ModChemBERT/blob/main/modchembert/tokenizers/molformer/vocab.json) (2362 tokens) ## Pooling (Classifier / Regressor Head) Kallergis et al. [1] demonstrated that the CLM embedding method prior to the prediction head can significantly impact downstream performance. Behrendt et al. [2] noted that the last few layers contain task-specific information and that pooling methods leveraging information from multiple layers can enhance model performance. Their results further demonstrated that the `max_seq_mha` pooling method was particularly effective in low-data regimes, which is often the case for molecular property prediction tasks. 
Multiple pooling strategies are supported by ModChemBERT to explore their impact on downstream performance: - `cls`: Last layer [CLS] - `mean`: Mean over last hidden layer - `max_cls`: Max over last k layers of [CLS] - `cls_mha`: MHA with [CLS] as query - `max_seq_mha`: MHA with max pooled sequence as KV and max pooled [CLS] as query - `sum_mean`: Sum over all layers then mean tokens - `sum_sum`: Sum over all layers then sum tokens - `mean_mean`: Mean over all layers then mean tokens - `mean_sum`: Mean over all layers then sum tokens - `max_seq_mean`: Max over last k layers then mean tokens ## Training Pipeline <div align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/656892962693fa22e18b5331/bxNbpgMkU8m60ypyEJoWQ.png" alt="ModChemBERT Training Pipeline" width="650"/> </div> ### Rationale for MTR Stage Following Sultan et al. [3], multi-task regression (physicochemical properties) biases the latent space toward ADME-related representations prior to narrow TAFT specialization. Sultan et al. observed that MLM + DAPT (MTR) outperforms MLM-only, MTR-only, and MTR + DAPT (MTR). ### Checkpoint Averaging Motivation Inspired by ModernBERT [4], JaColBERTv2.5 [5], and Llama 3.1 [6], where results show that model merging can enhance generalization or performance while mitigating overfitting to any single fine-tune or annealing checkpoint. ## Datasets - Pretraining: [Derify/augmented_canonical_druglike_QED_Pfizer_15M](https://huggingface.co/datasets/Derify/augmented_canonical_druglike_QED_Pfizer_15M) - Domain Adaptive Pretraining (DAPT) & Task Adaptive Fine-tuning (TAFT): ADME + AstraZeneca datasets (10 tasks) with scaffold splits from DA4MT pipeline (see [domain-adaptation-molecular-transformers](https://github.com/emapco/ModChemBERT/tree/main/domain-adaptation-molecular-transformers)) - Benchmarking: ChemBERTa-3 [7] tasks (BACE, BBBP, TOX21, HIV, SIDER, CLINTOX for classification; ESOL, FREESOLV, LIPO, BACE, CLEARANCE for regression) ## Benchmarking Benchmarks were conducted with the ChemBERTa-3 framework using DeepChem scaffold splits. Each task was trained for 100 epochs with 3 random seeds. ### Evaluation Methodology - Classification Metric: ROC AUC. - Regression Metric: RMSE. - Aggregation: Mean ± standard deviation of the triplicate results. - Input Constraints: SMILES truncated / filtered to ≤200 tokens, following the MolFormer paper's recommendation. 
### Results <details><summary>Click to expand</summary> #### Classification Datasets (ROC AUC - Higher is better) | Model | BACE↑ | BBBP↑ | CLINTOX↑ | HIV↑ | SIDER↑ | TOX21↑ | AVG† | | ---------------------------------------------------------------------------- | ----------------- | ----------------- | --------------------- | --------------------- | --------------------- | ----------------- | ------ | | **Tasks** | 1 | 1 | 2 | 1 | 27 | 12 | | | [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 0.781 ± 0.019 | 0.700 ± 0.027 | 0.979 ± 0.022 | 0.740 ± 0.013 | 0.611 ± 0.002 | 0.718 ± 0.011 | 0.7548 | | [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 0.819 ± 0.019 | 0.735 ± 0.019 | 0.839 ± 0.013 | 0.762 ± 0.005 | 0.618 ± 0.005 | 0.723 ± 0.012 | 0.7493 | | MoLFormer-LHPC* | **0.887 ± 0.004** | **0.908 ± 0.013** | 0.993 ± 0.004 | 0.750 ± 0.003 | 0.622 ± 0.007 | **0.791 ± 0.014** | 0.8252 | | ------------------------- | ----------------- | ----------------- | ------------------- | ------------------- | ------------------- | ----------------- | ------ | | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8065 ± 0.0103 | 0.7222 ± 0.0150 | 0.9709 ± 0.0227 | ***0.7800 ± 0.0133*** | 0.6419 ± 0.0113 | 0.7400 ± 0.0044 | 0.7769 | | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8224 ± 0.0156 | 0.7402 ± 0.0095 | 0.9820 ± 0.0138 | 0.7702 ± 0.0020 | 0.6303 ± 0.0039 | 0.7360 ± 0.0036 | 0.7802 | | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.7924 ± 0.0155 | 0.7282 ± 0.0058 | 0.9725 ± 0.0213 | 0.7770 ± 0.0047 | 0.6542 ± 0.0128 | *0.7646 ± 0.0039* | 0.7815 | | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8213 ± 0.0051 | 0.7356 ± 0.0094 | 0.9664 ± 0.0202 | 0.7750 ± 0.0048 | 0.6415 ± 0.0094 | 0.7263 ± 0.0036 | 0.7777 | | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | *0.8346 ± 0.0045* | *0.7573 ± 0.0120* | ***0.9938 ± 0.0017*** | 0.7737 ± 0.0034 | ***0.6600 ± 0.0061*** | 0.7518 ± 0.0047 | 0.7952 | #### Regression Datasets (RMSE - Lower is better) | Model | BACE↓ | CLEARANCE↓ | ESOL↓ | FREESOLV↓ | LIPO↓ | AVG‡ | | ---------------------------------------------------------------------------- | --------------------- | ---------------------- | --------------------- | --------------------- | --------------------- | ---------------- | | **Tasks** | 1 | 1 | 1 | 1 | 1 | | | [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 1.011 ± 0.038 | 51.582 ± 3.079 | 0.920 ± 0.011 | 0.536 ± 0.016 | 0.758 ± 0.013 | 0.8063 / 10.9614 | | [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 1.094 ± 0.126 | 52.058 ± 2.767 | 0.829 ± 0.019 | 0.572 ± 0.023 | 0.728 ± 0.016 | 0.8058 / 11.0562 | | MoLFormer-LHPC* | 1.201 ± 0.100 | 45.74 ± 2.637 | 0.848 ± 0.031 | 0.683 ± 0.040 | 0.895 ± 0.080 | 0.9068 / 9.8734 | | ------------------------- | ------------------- | -------------------- | ------------------- | ------------------- | ------------------- | ---------------- | | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 1.0893 ± 0.1319 | 49.0005 ± 1.2787 | 0.8456 ± 0.0406 | 0.5491 ± 0.0134 | 0.7147 ± 0.0062 | 0.7997 / 10.4398 | | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.9931 ± 0.0258 | 45.4951 ± 0.7112 | 0.9319 ± 0.0153 | 0.6049 ± 0.0666 | 0.6874 ± 0.0040 | 0.8043 / 9.7425 | | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 1.0304 ± 0.1146 | 47.8418 ± 0.4070 | ***0.7669 ± 0.0024*** | 
0.5293 ± 0.0267 | 0.6708 ± 0.0074 | 0.7493 / 10.1678 | | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.9713 ± 0.0224 | ***42.8010 ± 3.3475*** | 0.8169 ± 0.0268 | 0.5445 ± 0.0257 | 0.6820 ± 0.0028 | 0.7537 / 9.1631 | | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.9665 ± 0.0250*** | 44.0137 ± 1.1110 | 0.8158 ± 0.0115 | ***0.4979 ± 0.0158*** | ***0.6505 ± 0.0126*** | 0.7327 / 9.3889 | **Bold** indicates the best result in the column; *italic* indicates the best result among ModChemBERT checkpoints.<br/> \* Published results from the ChemBERTa-3 [7] paper for optimized chemical language models using DeepChem scaffold splits.<br/> † AVG column shows the mean score across all classification tasks.<br/> ‡ AVG column shows the mean scores across all regression tasks without and with the clearance score. </details> ## Optimized ModChemBERT Hyperparameters <details><summary>Click to expand</summary> ### TAFT Datasets Optimal parameters (per dataset) for the `MLM + DAPT + TAFT OPT` merged model: | Dataset | Learning Rate | Batch Size | Warmup Ratio | Classifier Pooling | Last k Layers | | ---------------------- | ------------- | ---------- | ------------ | ------------------ | ------------- | | adme_microsom_stab_h | 3e-5 | 8 | 0.0 | max_seq_mean | 5 | | adme_microsom_stab_r | 3e-5 | 16 | 0.2 | max_cls | 3 | | adme_permeability | 3e-5 | 8 | 0.0 | max_cls | 3 | | adme_ppb_h | 1e-5 | 32 | 0.1 | max_seq_mean | 5 | | adme_ppb_r | 1e-5 | 32 | 0.0 | sum_mean | N/A | | adme_solubility | 3e-5 | 32 | 0.0 | sum_mean | N/A | | astrazeneca_CL | 3e-5 | 8 | 0.1 | max_seq_mha | 3 | | astrazeneca_LogD74 | 1e-5 | 8 | 0.0 | max_seq_mean | 5 | | astrazeneca_PPB | 1e-5 | 32 | 0.0 | max_cls | 3 | | astrazeneca_Solubility | 1e-5 | 32 | 0.0 | max_seq_mean | 5 | ### Benchmarking Datasets Optimal parameters (per dataset) for the `MLM + DAPT + TAFT OPT` merged model: | Dataset | Batch Size | Classifier Pooling | Last k Layers | Pooling Attention Dropout | Classifier Dropout | Embedding Dropout | | ------------------- | ---------- | ------------------ | ------------- | ------------------------- | ------------------ | ----------------- | | bace_classification | 32 | max_seq_mha | 3 | 0.0 | 0.0 | 0.0 | | bbbp | 64 | max_cls | 3 | 0.1 | 0.0 | 0.0 | | clintox | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 | | hiv | 32 | max_seq_mha | 3 | 0.0 | 0.0 | 0.0 | | sider | 32 | mean | N/A | 0.1 | 0.0 | 0.1 | | tox21 | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 | | base_regression | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 | | clearance | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 | | esol | 64 | sum_mean | N/A | 0.1 | 0.0 | 0.1 | | freesolv | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 | | lipo | 32 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 | </details> ## Intended Use * Primary: Research and development for molecular property prediction, experimentation with pooling strategies, and as a foundational model for downstream applications. * Appropriate for: Binary / multi-class classification (e.g., toxicity, activity) and single-task or multi-task regression (e.g., solubility, clearance) after fine-tuning. * Not intended for generating novel molecules. ## Limitations - Out-of-domain performance may degrade for: very long (>128 token) SMILES, inorganic / organometallic compounds, polymers, or charged / enumerated tautomers are not well represented in training. - No guarantee of synthesizability, safety, or biological efficacy. 
## Ethical Considerations & Responsible Use - Potential biases arise from training corpora skewed to drug-like space. - Do not deploy in clinical or regulatory settings without rigorous, domain-specific validation. ## Hardware Training and experiments were performed on 2 NVIDIA RTX 3090 GPUs. ## Citation If you use ModChemBERT in your research, please cite the checkpoint and the following: ``` @software{cortes-2025-modchembert, author = {Emmanuel Cortes}, title = {ModChemBERT: ModernBERT as a Chemical Language Model}, year = {2025}, publisher = {GitHub}, howpublished = {GitHub repository}, url = {https://github.com/emapco/ModChemBERT} } ``` ## References 1. Kallergis, Georgios, et al. "Domain adaptable language modeling of chemical compounds identifies potent pathoblockers for Pseudomonas aeruginosa." Communications Chemistry 8.1 (2025): 114. 2. Behrendt, Maike, Stefan Sylvius Wagner, and Stefan Harmeling. "MaxPoolBERT: Enhancing BERT Classification via Layer-and Token-Wise Aggregation." arXiv preprint arXiv:2505.15696 (2025). 3. Sultan, Afnan, et al. "Transformers for molecular property prediction: Domain adaptation efficiently improves performance." arXiv preprint arXiv:2503.03360 (2025). 4. Warner, Benjamin, et al. "Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference." arXiv preprint arXiv:2412.13663 (2024). 5. Clavié, Benjamin. "JaColBERTv2.5: Optimising Multi-Vector Retrievers to Create State-of-the-Art Japanese Retrievers with Constrained Resources." Journal of Natural Language Processing 32.1 (2025): 176-218. 6. Grattafiori, Aaron, et al. "The llama 3 herd of models." arXiv preprint arXiv:2407.21783 (2024). 7. Singh, Riya, et al. "ChemBERTa-3: An Open Source Training Framework for Chemical Foundation Models." (2025).
smoorsmith/peft_test
smoorsmith
2025-09-24T22:40:23Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:smoorsmith/Dream-coder-test", "base_model:adapter:smoorsmith/Dream-coder-test", "region:us" ]
null
2025-09-24T22:40:17Z
--- base_model: smoorsmith/Dream-coder-test library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
yufeng1/OpenThinker-7B-reasoning-lora-merged-OT-hard-type1-e3
yufeng1
2025-09-24T22:38:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T22:38:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jq/qwen3-32b-sunflower-dpo-sft-20250924
jq
2025-09-24T22:38:18Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:jq/sunflower-32b-pretrained", "base_model:finetune:jq/sunflower-32b-pretrained", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-24T22:38:05Z
--- base_model: jq/sunflower-32b-pretrained tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** jq - **License:** apache-2.0 - **Finetuned from model:** jq/sunflower-32b-pretrained This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
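As a minimal inference sketch (assuming the repository exposes standard 🤗 Transformers weights with a chat template, and that enough GPU memory is available for a 32B model; neither is confirmed by the card above):

```python
# Minimal inference sketch for the uploaded checkpoint; illustrative assumptions only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jq/qwen3-32b-sunflower-dpo-sft-20250924"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```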
Dendiiiii/llava-1.5-7b-dcvlr-team-2
Dendiiiii
2025-09-24T22:38:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-24T22:38:08Z
--- license: apache-2.0 ---
jerrimu/libremodel
jerrimu
2025-09-24T22:37:42Z
0
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-06-29T04:19:46Z
--- license: cc0-1.0 datasets: - PleIAs/common_corpus - isaacus/mteb-GovReport - sedthh/gutenberg_english - wikimedia/wikipedia language: - en --- LibreModel I (0.96B) Model Description LibreModel I is a 960M parameter language model trained exclusively on copyright-free, public domain data using a novel 4-phase curriculum learning approach. This model demonstrates that competitive language models can be built without relying on copyrighted content, making AI development more accessible and legally clear. Key Innovation: First model to use curriculum learning with exclusively public domain data, showing that copyright-free training can achieve competitive results at a fraction of typical training costs ($500 total budget). Model Details Model Type: Causal Language Model (GPT-style) Parameters: 960M (0.96B) Architecture: LlamaConfig with optimizations Context Length: 3,072 tokens Vocabulary Size: 128,256 (LLaMA 3 tokenizer) Training Tokens: 19.2B (Chinchilla-optimal) Training Cost: ~$500 using AWS spot instances Architecture Features Layers: 22 transformer layers Attention Heads: 24 total, 8 key-value heads (3:1 GQA) Hidden Size: 1,536 Sink Tokens: 4 persistent context tokens for improved long-range attention Optimizations: Flash Attention 2, gradient checkpointing, bf16 mixed precision 4-Phase Curriculum Training Phase 1: Foundation (0-8%) 70% Project Gutenberg (literature, classics) 30% Government Reports (analytical structure) Phase 2: Diversification (8-20%) 50% Project Gutenberg 45% Wikipedia (factual knowledge) 5% Government Reports Phase 3: Advanced Reasoning (20-40%) 40% Project Gutenberg 30% Harvard Legal Cases (logical reasoning) 30% Wikipedia Phase 4: Optimization (40-100%) 40% Project Gutenberg 30% Wikipedia 30% OpenGovernment (diverse analytical content) Note: Harvard legal data was eliminated after 40% due to persistent training instabilities and replaced with OpenGovernment data for better stability while maintaining reasoning patterns. Training Data Sources (100% Public Domain) Project Gutenberg: Classical literature, philosophy, science texts Wikipedia: Encyclopedia articles and factual content Government Documents: Policy papers, reports, legal documents OpenGovernment: Diverse government publications and analyses Total: ~19.2B tokens across all phases, with careful curation to ensure public domain status. This is a base model and is not yet ready for end use. Post-training begins at the end of the month, and the post-trained model will be uploaded once it is complete. GGUFs can be found at https://github.com/openconstruct/libremodel/releases
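The architecture figures above translate roughly into a Hugging Face `LlamaConfig` along the following lines; this is a sketch reconstructed from the stated numbers, not the exact training configuration (the feed-forward width is not given, and the sink-token mechanism is a custom addition outside the stock config):

```python
# Approximate LibreModel I configuration, reconstructed from the card; illustrative only.
from transformers import LlamaConfig

config = LlamaConfig(
    vocab_size=128256,             # LLaMA 3 tokenizer
    hidden_size=1536,
    num_hidden_layers=22,
    num_attention_heads=24,
    num_key_value_heads=8,         # 3:1 grouped-query attention
    max_position_embeddings=3072,  # 3,072-token context
)
print(config)
```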
xnftraff/Qwen3-0.6B-Gensyn-Swarm-graceful_dappled_owl
xnftraff
2025-09-24T22:37:04Z
34
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am graceful_dappled_owl", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-25T18:02:00Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am graceful_dappled_owl --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tomal66/gemma3-1b-sarcasm-T3
tomal66
2025-09-24T22:35:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-24T22:35:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AntonDergunov/LunarLander_PPO
AntonDergunov
2025-09-24T22:31:23Z
0
0
Stable-Baselines3
[ "Stable-Baselines3", "reinforcement-learning", "deep-reinforcement-learning", "PPO", "LunarLander-v3", "model-index", "region:us" ]
reinforcement-learning
2025-09-13T15:07:19Z
--- library_name: Stable-Baselines3 tags: - reinforcement-learning - deep-reinforcement-learning - PPO - LunarLander-v3 model-index: - name: AntonDergunov/LunarLander_PPO results: - task: type: reinforcement-learning name: Reinforcement Learning dataset: name: LunarLander-v3 type: environment metrics: - type: mean_reward value: 270.7984945 name: Mean Reward - type: std_reward value: 22.49963279810933 name: Std Reward --- # AntonDergunov/LunarLander_PPO This model was trained with **PPO** using **Stable-Baselines3** on **LunarLander-v3**. ## Usage ```python import gymnasium as gym from stable_baselines3 import PPO from huggingface_hub import hf_hub_download model_path = hf_hub_download(repo_id="AntonDergunov/LunarLander_PPO", filename="model.zip") model = PPO.load(model_path, device="cpu") env = gym.make("LunarLander-v3") obs, info = env.reset() for _ in range(1000): action, _ = model.predict(obs, deterministic=True) obs, reward, terminated, truncated, info = env.step(action) if terminated or truncated: obs, info = env.reset() ```
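For completeness, a training run of this kind could be sketched as follows; the hyperparameters and timestep budget are assumptions, not the settings used for this checkpoint:

```python
# Illustrative PPO training sketch on LunarLander-v3; hyperparameters are assumed, not published.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("LunarLander-v3")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)  # assumed budget
model.save("LunarLander_PPO")
```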
BlinkDL/temp-latest-training-models
BlinkDL
2025-09-24T22:23:55Z
0
63
null
[ "region:us" ]
null
2023-08-05T11:20:33Z
https://rwkv.com https://x.com/BlinkDL_AI Here are some cutting-edge preview RWKV models: https://huggingface.co/BlinkDL/temp-latest-training-models/tree/main including RWKV-7 checkpoints (already supported by https://pypi.org/project/rwkv/ and https://github.com/Ai00-X/ai00_server) Ref: https://huggingface.co/BlinkDL/rwkv-7-world "single_round_qa" are "states" (check https://x.com/BlinkDL_AI/status/1788354345807057035), and these are single-round Q&A states. data_sample is a random subsample of the world dataset. Note: due to technical reasons (very complicated because of my horrible messy code), some distill instruct data are not included, and only subsamples of these instruct datasets are included: flan, Buzz-V12, WebInstructSub, SKGInstruct, PIPPA, COIG-PC-core
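A minimal sketch of loading one of these preview checkpoints with the `rwkv` pip package could look like this; the file path is a placeholder, the strategy string should match your hardware, and the `RWKV_V7_ON` flag is an assumption for v7 support in recent package versions (check the package README):

```python
# Minimal RWKV loading sketch; the checkpoint path below is a placeholder.
import os
os.environ["RWKV_V7_ON"] = "1"  # assumption: may be required for RWKV-7 checkpoints

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

model = RWKV(model="path/to/rwkv-7-preview-checkpoint", strategy="cpu fp32")
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")

out = pipeline.generate(
    "Hello", token_count=64, args=PIPELINE_ARGS(temperature=1.0, top_p=0.7)
)
print(out)
```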
corzamennav/blockassist-bc-territorial_wild_antelope_1758752566
corzamennav
2025-09-24T22:23:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-24T22:23:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
gustavokuklinski/aeon-GGUF
gustavokuklinski
2025-09-24T22:20:27Z
577
1
null
[ "gguf", "llama.cpp", "en", "dataset:gustavokuklinski/aeon", "base_model:gustavokuklinski/aeon-360m", "base_model:quantized:gustavokuklinski/aeon-360m", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-09T21:48:41Z
--- license: mit datasets: - gustavokuklinski/aeon language: - en base_model: - gustavokuklinski/aeon-360m tags: - llama.cpp --- ![alt text](https://raw.githubusercontent.com/gustavokuklinski/aeon.ai/refs/heads/main/docs/assets/img/aeon-logo.png) # AEON AEON is portable, private, and capable of operating fully offline. It democratizes access to powerful, dynamic AI capabilities for a wider audience, regardless of their hardware. The finetuned model was built to act like a "friend" for RAG over personal files and for working with insights. #### Docs - **Page:** [aeon.ai](https://gustavokuklinski.github.io/aeon.ai) - **Github Project:** [AEON.ai](https://github.com/gustavokuklinski/aeon.ai/) - **Github LLM Finetune Scripts:** [AEON.llm](https://github.com/gustavokuklinski/aeon.llm/) # Using Aeon AEON uses Python with a virtual environment and requires `git lfs` to be installed. ```shell /$ git lfs install # With plugins /$ git clone --recurse-submodules https://github.com/gustavokuklinski/aeon.ai.git # Without plugins /$ git clone https://github.com/gustavokuklinski/aeon.ai.git ``` ```shell # Create .venv /$ python -m venv .venv # Start virtual env /$ source .venv/bin/activate # Run check and install dependencies /$ python3 scripts/install.py # Start AEON /$ python3 aeon.py ``` ### Using Docker ```bash docker build -t aeon . docker run -it --rm -p 7860:7860 -v "$(pwd):/app" aeon ``` ### Tested on | OS | CPU | GPU | RAM | |:---|:---|:---|:---| | Ubuntu 24.04.2 LTS | Intel i7-10510U | Intel CometLake-U GT2 | 16GB | | Windows 11 Home Edition | Intel i7-10510U | Intel CometLake-U GT2 | 8GB | ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo gustavokuklinski/aeon-GGUF --hf-file aeon-360M.Q8_0.gguf -p "What is a virtual species?" ``` ### Server: ```bash llama-server --hf-repo gustavokuklinski/aeon-GGUF --hf-file aeon-360M.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo gustavokuklinski/aeon-GGUF --hf-file aeon-360M.Q8_0.gguf -p "What is a virtual species?" ``` or ``` ./llama-server --hf-repo gustavokuklinski/aeon-GGUF --hf-file aeon-360M.Q8_0.gguf -c 2048 ```
crystalline7/1532124
crystalline7
2025-09-24T22:20:03Z
0
0
null
[ "region:us" ]
null
2025-09-24T22:19:44Z
[View on Civ Archive](https://civarchive.com/models/1443466?modelVersionId=1631772)
corzamennav/blockassist-bc-territorial_wild_antelope_1758751949
corzamennav
2025-09-24T22:13:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-24T22:13:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).