Dataset schema (the records below follow this column order):

| Column | Type | Min | Max |
|:--------------|:----------------------|:--------------------|:--------------------|
| modelId | string | 5 chars | 139 chars |
| author | string | 2 chars | 42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-02 12:32:32 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (534 classes) | | |
| tags | list | 1 item | 4.05k items |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-02 12:31:20 |
| card | string | 11 chars | 1.01M chars |
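For working with rows shaped like this schema, here is a minimal sketch; the dataset path `user/hub-model-metadata` is a hypothetical placeholder, so substitute whatever source actually holds these records:

```python
# Hedged sketch: load and filter records matching the schema above.
# "user/hub-model-metadata" is a placeholder dataset id, not a real one.
from datasets import load_dataset

ds = load_dataset("user/hub-model-metadata", split="train")

# Keep models that declare a library and have at least one download.
subset = ds.filter(lambda r: r["library_name"] is not None and r["downloads"] > 0)

for row in subset.select(range(min(5, len(subset)))):
    print(row["modelId"], row["pipeline_tag"], row["downloads"])
```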
AnerYubo/blockassist-bc-elusive_mammalian_termite_1756714210
AnerYubo
2025-09-01T08:10:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "elusive mammalian termite", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:10:10Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- elusive mammalian termite
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
itsmanikumar/gpt-oss-20b-multilingual-reasoner
itsmanikumar
2025-09-01T08:08:32Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "dataset:HuggingFaceH4/Multilingual-Thinking", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us" ]
null
2025-09-01T07:51:15Z
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---

# Model Card for gpt-oss-20b-multilingual-reasoner

This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="itsmanikumar/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.9.0.dev20250804+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
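Since the card names the framework but not the training code, here is a minimal SFT sketch with TRL mirroring the setup described above; the settings shown are illustrative assumptions, not the hyperparameters actually used:

```python
# Hedged sketch of the SFT setup described above (illustrative settings only).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",  # base model from the card
    train_dataset=dataset,
    args=SFTConfig(output_dir="gpt-oss-20b-multilingual-reasoner"),
)
trainer.train()
```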
mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF
mradermacher
2025-09-01T08:05:35Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k", "base_model:quantized:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-09-01T06:52:35Z
---
base_model: EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags: []
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

weighted/imatrix quants of https://huggingface.co/EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF).***

Static quants are available at https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q4_0.gguf) | i1-Q4_0 | 1.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q4_1.gguf) | i1-Q4_1 | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q6_K.gguf) | i1-Q6_K | 1.5 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
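As a concrete starting point, here is a minimal sketch using `llama-cpp-python`, assuming you have installed it and downloaded one of the quant files from the table above to the working directory (the prompt is illustrative):

```python
# Hedged sketch: run a downloaded i1-Q4_K_M quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q4_K_M.gguf",
    n_ctx=2048,  # context window; adjust to your memory budget
)
out = llm("Label the sentiment of: 'The update broke everything.'", max_tokens=32)
print(out["choices"][0]["text"])
```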
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756713850
Ferdi3425
2025-09-01T08:05:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:05:01Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
betreosi/blockassist-bc-stinging_prowling_lion_1756713877
betreosi
2025-09-01T08:05:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinging prowling lion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:05:04Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging prowling lion
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
faisu-eth/blockassist-bc-thick_twitchy_jackal_1756713764
faisu-eth
2025-09-01T08:03:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick twitchy jackal", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:03:10Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick twitchy jackal
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Nerva1228/gushinn
Nerva1228
2025-09-01T07:57:36Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-01T07:57:35Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: gushinn
---

# Gushinn

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `gushinn` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "gushinn",
    "lora_weights": "https://huggingface.co/Nerva1228/gushinn/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/gushinn', weight_name='lora.safetensors')
image = pipeline('gushinn').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/Nerva1228/gushinn/discussions) to add images that show off what you've made with this LoRA.
david3621/blockassist-bc-gentle_meek_cat_1756712133
david3621
2025-09-01T07:54:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle meek cat", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T07:51:24Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle meek cat
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
varshithkumar/wbc_resnet50
varshithkumar
2025-09-01T07:49:11Z
9
0
keras
[ "keras", "tf-keras", "tensorflow", "image-classification", "license:apache-2.0", "region:us" ]
image-classification
2025-08-29T13:41:33Z
---
pipeline_tag: image-classification
tags:
- keras
- tensorflow
- image-classification
license: apache-2.0
---

# WBC ResNet50

This is a ResNet50 model trained on the WBC dataset using Keras.
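A minimal loading sketch, assuming the repo stores a standard Keras model; `from_pretrained_keras` comes from the `huggingface_hub` library:

```python
# Hedged sketch: pull the Keras model from the Hub and inspect it.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("varshithkumar/wbc_resnet50")
model.summary()  # ResNet50 backbone per the card
```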
Wan-AI/Wan2.2-S2V-14B
Wan-AI
2025-09-01T07:48:57Z
10,708
213
diffusers
[ "diffusers", "safetensors", "s2v", "other", "arxiv:2508.18621", "arxiv:2503.20314", "license:apache-2.0", "region:us" ]
other
2025-08-25T02:38:55Z
---
license: apache-2.0
pipeline_tag: other
library_name: diffusers
---

# Wan2.2-S2V-14B: Audio-Driven Cinematic Video Generation

This repository features the **Wan2.2-S2V-14B** model, designed for audio-driven cinematic video generation. It was introduced in the paper: [**Wan-S2V: Audio-Driven Cinematic Video Generation**](https://huggingface.co/papers/2508.18621)

<p align="center">
    <img src="assets/logo.png" width="400"/>
<p>

<p align="center">
    💜 <a href="https://wan.video"><b>Wan Homepage</b></a> | 🖥️ <a href="https://github.com/Wan-Video/Wan2.2">GitHub</a> | 🤗 <a href="https://huggingface.co/Wan-AI/">Hugging Face Organization</a> | 🤖 <a href="https://modelscope.cn/organization/Wan-AI">ModelScope Organization</a> | 📑 <a href="https://huggingface.co/papers/2508.18621">Wan-S2V Paper</a> | 📑 <a href="https://arxiv.org/abs/2503.20314">Wan2.2 Base Paper</a> | 🌐 <a href="https://humanaigc.github.io/wan-s2v-webpage">Project Page</a> | 📑 <a href="https://wan.video/welcome?spm=a2ty_o02.30011076.0.0.6c9ee41eCcluqg">Blog</a> | 💬 <a href="https://discord.gg/AKNgpMK4Yj">Discord</a>
    <br>
    📕 <a href="https://alidocs.dingtalk.com/i/nodes/jb9Y4gmKWrx9eo4dCql9LlbYJGXn6lpz">User Guide (Chinese)</a> | 📘 <a href="https://alidocs.dingtalk.com/i/nodes/EpGBa2Lm8aZxe5myC99MelA2WgN7R35y">User Guide (English)</a> | 💬 <a href="https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg">WeChat (微信)</a>
<p>

## Abstract (Wan-S2V Paper)

Current state-of-the-art (SOTA) methods for audio-driven character animation demonstrate promising performance for scenarios primarily involving speech and singing. However, they often fall short in more complex film and television productions, which demand sophisticated elements such as nuanced character interactions, realistic body movements, and dynamic camera work. To address this long-standing challenge of achieving film-level character animation, we propose an audio-driven model, which we refer to as Wan-S2V, built upon Wan. Our model achieves significantly enhanced expressiveness and fidelity in cinematic contexts compared to existing approaches. We conducted extensive experiments, benchmarking our method against cutting-edge models such as Hunyuan-Avatar and Omnihuman. The experimental results consistently demonstrate that our approach significantly outperforms these existing solutions. Additionally, we explore the versatility of our method through its applications in long-form video generation and precise video lip-sync editing.

-----

[**Wan: Open and Advanced Large-Scale Video Generative Models**](https://arxiv.org/abs/2503.20314) <br>

We are excited to introduce **Wan2.2**, a major upgrade to our foundational video models. With **Wan2.2**, we have focused on incorporating the following innovations:

- 👍 **Effective MoE Architecture**: Wan2.2 introduces a Mixture-of-Experts (MoE) architecture into video diffusion models. By separating the denoising process across timesteps with specialized powerful expert models, this enlarges the overall model capacity while maintaining the same computational cost.
- 👍 **Cinematic-level Aesthetics**: Wan2.2 incorporates meticulously curated aesthetic data, complete with detailed labels for lighting, composition, contrast, color tone, and more. This allows for more precise and controllable cinematic style generation, facilitating the creation of videos with customizable aesthetic preferences.
- 👍 **Complex Motion Generation**: Compared to Wan2.1, Wan2.2 is trained on a significantly larger dataset, with +65.6% more images and +83.2% more videos. This expansion notably enhances the model's generalization across multiple dimensions such as motions, semantics, and aesthetics, achieving TOP performance among all open-sourced and closed-sourced models.
- 👍 **Efficient High-Definition Hybrid TI2V**: Wan2.2 open-sources a 5B model built with our advanced Wan2.2-VAE that achieves a compression ratio of **16×16×4**. This model supports both text-to-video and image-to-video generation at 720P resolution with 24fps and can also run on consumer-grade graphics cards like the 4090. It is one of the fastest **720P@24fps** models currently available, capable of serving both the industrial and academic sectors simultaneously.

## Video Demos

<div align="center">
  <video width="80%" controls>
    <source src="https://cloud.video.taobao.com/vod/4szTT1B0LqXvJzmuEURfGRA-nllnqN_G2AT0ZWkQXoQ.mp4" type="video/mp4">
    Your browser does not support the video tag.
  </video>
</div>

## 🔥 Latest News!!

* Aug 26, 2025: 🎵 We introduce **[Wan2.2-S2V-14B](https://humanaigc.github.io/wan-s2v-webpage)**, an audio-driven cinematic video generation model, including [inference code](#run-speech-to-video-generation), [model weights](#model-download), and [technical report](https://humanaigc.github.io/wan-s2v-webpage/content/wan-s2v.pdf)! Now you can try it on [wan.video](https://wan.video/), [ModelScope Gradio](https://www.modelscope.cn/studios/Wan-AI/Wan2.2-S2V) or [HuggingFace Gradio](https://huggingface.co/spaces/Wan-AI/Wan2.2-S2V)!
* Jul 28, 2025: 👋 We have opened a [HF space](https://huggingface.co/spaces/Wan-AI/Wan-2.2-5B) using the TI2V-5B model. Enjoy!
* Jul 28, 2025: 👋 Wan2.2 has been integrated into ComfyUI ([CN](https://docs.comfy.org/zh-CN/tutorials/video/wan/wan2_2) | [EN](https://docs.comfy.org/tutorials/video/wan/wan2_2)). Enjoy!
* Jul 28, 2025: 👋 Wan2.2's T2V, I2V and TI2V have been integrated into Diffusers ([T2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B-Diffusers) | [I2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B-Diffusers) | [TI2V-5B](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B-Diffusers)). Feel free to give it a try!
* Jul 28, 2025: 👋 We've released the inference code and model weights of **Wan2.2**.

## Community Works

If your research or project builds upon [**Wan2.1**](https://github.com/Wan-Video/Wan2.1) or [**Wan2.2**](https://github.com/Wan-Video/Wan2.2), and you would like more people to see it, please inform us.

- [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio) provides comprehensive support for Wan 2.2, including low-GPU-memory layer-by-layer offload, FP8 quantization, sequence parallelism, LoRA training, and full training.
- [Kijai's ComfyUI WanVideoWrapper](https://github.com/kijai/ComfyUI-WanVideoWrapper) is an alternative implementation of Wan models for ComfyUI. Thanks to its Wan-only focus, it's on the front line of getting cutting-edge optimizations and hot research features, which are often hard to integrate into ComfyUI quickly due to its more rigid structure.

## 📑 Todo List

- Wan2.2-S2V Speech-to-Video
    - [x] Inference code of Wan2.2-S2V
    - [x] Checkpoints of Wan2.2-S2V-14B
    - [ ] ComfyUI integration
    - [ ] Diffusers integration

## Run Wan2.2

#### Installation

Clone the repo:
```sh
git clone https://github.com/Wan-Video/Wan2.2.git
cd Wan2.2
```

Install dependencies:
```sh
# Ensure torch >= 2.4.0
# If the installation of `flash_attn` fails, try installing the other packages first and install `flash_attn` last
pip install -r requirements.txt
```

#### Model Download

| Models | Download Links | Description |
|--------|----------------|-------------|
| T2V-A14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-T2V-A14B) | Text-to-Video MoE model, supports 480P & 720P |
| I2V-A14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-I2V-A14B) | Image-to-Video MoE model, supports 480P & 720P |
| TI2V-5B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-TI2V-5B) | High-compression VAE, T2V+I2V, supports 720P |
| S2V-14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-S2V-14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-S2V-14B) | Speech-to-Video model, supports 480P & 720P |

Download models using huggingface-cli:
```sh
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.2-S2V-14B --local-dir ./Wan2.2-S2V-14B
```

Download models using modelscope-cli:
```sh
pip install modelscope
modelscope download Wan-AI/Wan2.2-S2V-14B --local_dir ./Wan2.2-S2V-14B
```

#### Run Speech-to-Video Generation

This repository supports the `Wan2.2-S2V-14B` Speech-to-Video model and supports video generation at both 480P and 720P resolutions.

- Single-GPU Speech-to-Video inference
```sh
python generate.py --task s2v-14B --size 1024*704 --ckpt_dir ./Wan2.2-S2V-14B/ --offload_model True --convert_model_dtype --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard." --image "examples/i2v_input.JPG" --audio "examples/talk.wav"
# Without setting --num_clip, the generated video length will automatically adjust based on the input audio length
```

> 💡 This command can run on a GPU with at least 80GB VRAM.

- Multi-GPU inference using FSDP + DeepSpeed Ulysses
```sh
torchrun --nproc_per_node=8 generate.py --task s2v-14B --size 1024*704 --ckpt_dir ./Wan2.2-S2V-14B/ --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard." --image "examples/i2v_input.JPG" --audio "examples/talk.wav"
```

- Pose + Audio driven generation
```sh
torchrun --nproc_per_node=8 generate.py --task s2v-14B --size 1024*704 --ckpt_dir ./Wan2.2-S2V-14B/ --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "a person is singing" --image "examples/pose.png" --audio "examples/sing.MP3" --pose_video "./examples/pose.mp4"
```

> 💡 For the Speech-to-Video task, the `size` parameter represents the area of the generated video, with the aspect ratio following that of the original input image.
> 💡 The model can generate videos from audio input combined with a reference image and an optional text prompt.
> 💡 The `--pose_video` parameter enables pose-driven generation, allowing the model to follow specific pose sequences while generating videos synchronized with audio input.
> 💡 The `--num_clip` parameter controls the number of video clips generated, useful for quick previews with shorter generation time.

## Computational Efficiency on Different GPUs

We test the computational efficiency of different **Wan2.2** models on different GPUs in the following table. The results are presented in the format: **Total time (s) / peak GPU memory (GB)**.

<div align="center">
  <img src="assets/comp_effic.png" alt="" style="width: 80%;" />
</div>

> The parameter settings for the tests presented in this table are as follows:
> (1) Multi-GPU: 14B: `--ulysses_size 4/8 --dit_fsdp --t5_fsdp`, 5B: `--ulysses_size 4/8 --offload_model True --convert_model_dtype --t5_cpu`; Single-GPU: 14B: `--offload_model True --convert_model_dtype`, 5B: `--offload_model True --convert_model_dtype --t5_cpu` (--convert_model_dtype converts model parameter types to config.param_dtype);
> (2) The distributed testing utilizes the built-in FSDP and Ulysses implementations, with FlashAttention3 deployed on Hopper architecture GPUs;
> (3) Tests were run without the `--use_prompt_extend` flag;
> (4) Reported results are the average of multiple samples taken after the warm-up phase.

-------

## Introduction of Wan2.2

**Wan2.2** builds on the foundation of Wan2.1 with notable improvements in generation quality and model capability. This upgrade is driven by a series of key technical innovations, mainly including the Mixture-of-Experts (MoE) architecture, upgraded training data, and high-compression video generation.

##### (1) Mixture-of-Experts (MoE) Architecture

Wan2.2 introduces a Mixture-of-Experts (MoE) architecture into the video generation diffusion model. MoE has been widely validated in large language models as an efficient approach to increase total model parameters while keeping inference cost nearly unchanged. In Wan2.2, the A14B model series adopts a two-expert design tailored to the denoising process of diffusion models: a high-noise expert for the early stages, focusing on overall layout; and a low-noise expert for the later stages, refining video details. Each expert model has about 14B parameters, resulting in a total of 27B parameters but only 14B active parameters per step, keeping inference computation and GPU memory nearly unchanged.

<div align="center">
  <img src="assets/moe_arch.png" alt="" style="width: 90%;" />
</div>

The transition point between the two experts is determined by the signal-to-noise ratio (SNR), a metric that decreases monotonically as the denoising step $t$ increases. At the beginning of the denoising process, $t$ is large and the noise level is high, so the SNR is at its minimum, denoted as ${SNR}_{min}$. In this stage, the high-noise expert is activated. We define a threshold step ${t}_{moe}$ corresponding to half of the ${SNR}_{min}$, and switch to the low-noise expert when $t<{t}_{moe}$.

<div align="center">
  <img src="assets/moe_2.png" alt="" style="width: 90%;" />
</div>
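In code, the switching rule above reduces to a simple threshold test on the denoising step; a minimal illustrative sketch (not the official implementation):

```python
# Illustrative sketch of the two-expert switching rule described above:
# the high-noise expert handles early (noisy) steps, and generation switches
# to the low-noise expert once the step t drops below the threshold t_moe.
def select_expert(t: float, t_moe: float) -> str:
    return "high_noise_expert" if t >= t_moe else "low_noise_expert"

# Example with an assumed t_moe of 450 on a 1000-step schedule:
assert select_expert(t=900.0, t_moe=450.0) == "high_noise_expert"
assert select_expert(t=200.0, t_moe=450.0) == "low_noise_expert"
```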
To validate the effectiveness of the MoE architecture, four settings are compared based on their validation loss curves. The baseline **Wan2.1** model does not employ the MoE architecture. Among the MoE-based variants, the **Wan2.1 & High-Noise Expert** reuses the Wan2.1 model as the low-noise expert and uses Wan2.2's high-noise expert, while the **Wan2.1 & Low-Noise Expert** uses Wan2.1 as the high-noise expert and employs Wan2.2's low-noise expert. The **Wan2.2 (MoE)** (our final version) achieves the lowest validation loss, indicating that its generated video distribution is closest to the ground truth and exhibits superior convergence.

##### (2) Efficient High-Definition Hybrid TI2V

To enable more efficient deployment, Wan2.2 also explores a high-compression design. In addition to the 27B MoE models, a 5B dense model, i.e., TI2V-5B, is released. It is supported by a high-compression Wan2.2-VAE, which achieves a $T\times H\times W$ compression ratio of $4\times16\times16$, increasing the overall compression rate to 64 while maintaining high-quality video reconstruction. With an additional patchification layer, the total compression ratio of TI2V-5B reaches $4\times32\times32$. Without specific optimization, TI2V-5B can generate a 5-second 720P video in under 9 minutes on a single consumer-grade GPU, ranking among the fastest 720P@24fps video generation models. This model also natively supports both text-to-video and image-to-video tasks within a single unified framework, covering both academic research and practical applications.

<div align="center">
  <img src="assets/vae.png" alt="" style="width: 80%;" />
</div>

##### Comparisons to SOTAs

We compared Wan2.2 with leading closed-source commercial models on our new Wan-Bench 2.0, evaluating performance across multiple crucial dimensions. The results demonstrate that Wan2.2 achieves superior performance compared to these leading models.

<div align="center">
  <img src="assets/performance.png" alt="" style="width: 90%;" />
</div>

## Citation

If you find our work helpful, please cite us.

```
@article{wan2025,
      title={Wan: Open and Advanced Large-Scale Video Generative Models},
      author={Team Wan and Ang Wang and Baole Ai and Bin Wen and Chaojie Mao and Chen-Wei Xie and Di Chen and Feiwu Yu and Haiming Zhao and Jianxiao Yang and Jianyuan Zeng and Jiayu Wang and Jingfeng Zhang and Jingren Zhou and Jinkai Wang and Jixuan Chen and Kai Zhu and Kang Zhao and Keyu Yan and Lianghua Huang and Mengyang Feng and Ningyi Zhang and Pandeng Li and Pingyu Wu and Ruihang Chu and Ruili Feng and Shiwei Zhang and Siyang Sun and Tao Fang and Tianxing Wang and Tianyi Gui and Tingyu Weng and Tong Shen and Wei Lin and Wei Wang and Wei Wang and Wenmeng Zhou and Wente Wang and Wenting Shen and Wenyuan Yu and Xianzhong Shi and Xiaoming Huang and Xin Xu and Yan Kou and Yangyu Lv and Yifei Li and Yijing Liu and Yiming Wang and Yingya Zhang and Yitong Huang and Yong Li and You Wu and Yu Liu and Yulin Pan and Yun Zheng and Yuntao Hong and Yupeng Shi and Yutong Feng and Zeyinzi Jiang and Zhen Han and Zhi-Fan Wu and Ziyu Liu},
      journal = {arXiv preprint arXiv:2503.20314},
      year={2025}
}

@article{wan2025s2v,
      title={Wan-S2V: Audio-Driven Cinematic Video Generation},
      author={Xin Gao, Li Hu, Siqi Hu, Mingyang Huang, Chaonan Ji, Dechao Meng, Jinwei Qi, Penchong Qiao, Zhen Shen, Yafei Song, Ke Sun, Linrui Tian, Guangyuan Wang, Qi Wang, Zhongjian Wang, Jiayu Xiao, Sheng Xu, Bang Zhang, Peng Zhang, Xindi Zhang, Zhe Zhang, Jingren Zhou, Lian Zhuo},
      journal={arXiv preprint arXiv:2508.18621},
      year={2025}
}
```

## License Agreement

The models in this repository are licensed under the Apache 2.0 License. We claim no rights over your generated contents, granting you the freedom to use them while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt).

## Acknowledgements

We would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [Qwen](https://huggingface.co/Qwen), [umt5-xxl](https://huggingface.co/google/umt5-xxl), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research.

## Contact Us

If you would like to leave a message to our research or product teams, feel free to join our [Discord](https://discord.gg/AKNgpMK4Yj) or [WeChat groups](https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg)!
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756712662
Ferdi3425
2025-09-01T07:45:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T07:45:05Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnonymousCS/populism_classifier_399
AnonymousCS
2025-09-01T07:44:57Z
9
0
transformers
[ "transformers", "safetensors", "rembert", "text-classification", "generated_from_trainer", "base_model:google/rembert", "base_model:finetune:google/rembert", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-31T21:49:19Z
---
library_name: transformers
license: apache-2.0
base_model: google/rembert
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_399
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# populism_classifier_399

This model is a fine-tuned version of [google/rembert](https://huggingface.co/google/rembert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6043
- Accuracy: 0.9498
- 1-f1: 0.0
- 1-recall: 0.0
- 1-precision: 0.0
- Balanced Acc: 0.5

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:--------:|:-----------:|:------------:|
| 0.3688 | 1.0 | 130 | 0.6023 | 0.9498 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.1849 | 2.0 | 260 | 0.6466 | 0.9498 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.8075 | 3.0 | 390 | 0.6043 | 0.9498 | 0.0 | 0.0 | 0.0 | 0.5 |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
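The hyperparameter list above maps directly onto `transformers.TrainingArguments`; a hedged sketch, in which `output_dir` is a placeholder and `fp16=True` stands in for "Native AMP":

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="populism_classifier_399",  # placeholder output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # mixed precision ("Native AMP" in the card)
)
```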
Yuchan5386/SmoliteXL-2
Yuchan5386
2025-09-01T07:40:19Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-01T07:40:19Z
---
license: apache-2.0
---
AnonymousCS/populism_classifier_398
AnonymousCS
2025-09-01T07:38:29Z
3
0
transformers
[ "transformers", "safetensors", "rembert", "text-classification", "generated_from_trainer", "base_model:google/rembert", "base_model:finetune:google/rembert", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-31T21:41:45Z
---
library_name: transformers
license: apache-2.0
base_model: google/rembert
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_398
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# populism_classifier_398

This model is a fine-tuned version of [google/rembert](https://huggingface.co/google/rembert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6761
- Accuracy: 0.9226
- 1-f1: 0.0
- 1-recall: 0.0
- 1-precision: 0.0
- Balanced Acc: 0.5

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:--------:|:-----------:|:------------:|
| 0.5666 | 1.0 | 88 | 0.6764 | 0.9226 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.5654 | 2.0 | 176 | 0.6810 | 0.9226 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.6907 | 3.0 | 264 | 0.6758 | 0.9226 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.4527 | 4.0 | 352 | 0.6837 | 0.9226 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.6271 | 5.0 | 440 | 0.6761 | 0.9226 | 0.0 | 0.0 | 0.0 | 0.5 |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
walbosui/blockassist-bc-miniature_playful_walrus_1756712234
walbosui
2025-09-01T07:38:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature playful walrus", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T07:37:53Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature playful walrus
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zaydzuhri/top-340M-4096-model
zaydzuhri
2025-09-01T07:37:55Z
22
0
null
[ "safetensors", "top_transformer", "arxiv:2508.19228", "arxiv:1910.09700", "region:us" ]
null
2025-09-01T07:07:58Z
# This model is used in arxiv.org/abs/2508.19228
# Token Order Prediction

---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
zaydzuhri/vanilla-340M-4096-model
zaydzuhri
2025-09-01T07:37:03Z
114
0
null
[ "safetensors", "transformer", "arxiv:2504.20966", "arxiv:2508.19228", "region:us" ]
null
2025-04-21T07:15:55Z
# This model is from the paper arxiv.org/abs/2504.20966
# Softpick: No Attention Sink, No Massive Activations with Rectified Softmax

# Also used in arxiv.org/abs/2508.19228
# Token Order Prediction

See code: https://github.com/zaydzuhri/softpick-attention

This model is only usable through these repositories:

https://github.com/zaydzuhri/flash-linear-attention/tree/softpick-attention
https://github.com/zaydzuhri/flame/tree/softpick-attention
TryCAEAIXR/gemma-3-270m-it-blr-slang
TryCAEAIXR
2025-09-01T07:27:24Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T06:00:08Z
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: gemma-3-270m-it-blr-slang
tags:
- generated_from_trainer
- sft
- trl
licence: license
---

# Model Card for gemma-3-270m-it-blr-slang

This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TryCAEAIXR/gemma-3-270m-it-blr-slang", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
2hpsatt/blockassist-bc-huge_deft_eagle_1756711582
2hpsatt
2025-09-01T07:27:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T07:27:02Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arif696/blockassist-bc-regal_spotted_pelican_1756711000
arif696
2025-09-01T07:18:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T07:17:44Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AssanaliAidarkhan/qwen-medical-rag
AssanaliAidarkhan
2025-09-01T07:17:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-31T11:08:03Z
---
title: Qwen Medical RAG System
emoji: 🏥
colorFrom: green
colorTo: blue
sdk: gradio
app_file: app.py
pinned: false
license: apache-2.0
---

# Qwen Medical RAG System

Medical advisory system using Qwen 1.5 0.5B for ACL injury analysis.

## Knowledge Base Categories

This system provides advice for:

- `partial_acl_injury` - Partial ACL damage with some intact fibers
- `partial_acl_fiber_disruption` - Partial fiber disruption requiring evaluation
- `complete_acl_tear` - Complete ACL rupture requiring surgery
- `acl_sprain` - ACL strain with conservative treatment

## Files

- `medical_knowledge.json`: ACL medical knowledge base (4 categories)
- `rag_config.json`: System configuration

## Disclaimer

For research and educational purposes only. Not for clinical diagnosis. Always consult qualified medical professionals.
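A minimal retrieval sketch over the knowledge base file listed above; the internal structure of `medical_knowledge.json` (a mapping from category key to advice text) is an assumption, so adjust to the real schema:

```python
# Hedged sketch: look up advice for one of the four categories listed above.
# Assumes medical_knowledge.json maps category keys to advice strings.
import json

with open("medical_knowledge.json") as f:
    knowledge = json.load(f)

print(knowledge.get("partial_acl_injury", "category not found"))
```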
canvascomputing/malwi
canvascomputing
2025-09-01T07:13:18Z
0
0
null
[ "safetensors", "distilbert", "arxiv:2404.04991", "arxiv:2504.14886", "license:mit", "region:us" ]
null
2025-09-01T07:11:08Z
--- license: mit --- # malwi - AI Python Malware Scanner <img src="malwi-logo.png" alt="Logo"> ## malwi specializes in finding malware ### Key Features - 🛡️ **AI-Powered Python Malware Detection**: Leverages advanced AI to identify malicious code in Python projects with high accuracy. - ⚡ **Lightning-Fast Codebase Scanning**: Scans entire repositories in seconds, so you can focus on development—not security worries. - 🔒 **100% Offline & Private**: Your code never leaves your machine. Full control, zero data exposure. - 💰 **Free & Open-Source**: No hidden costs. Built on transparent research and openly available data. - 🇪🇺 **Developed in the EU**: Committed to open-source principles and European data standards. ### 1) Install ``` pip install --user malwi ``` ### 2) Run ```bash malwi scan examples/malicious ``` ### 3) Evaluate: a [recent zero-day](https://socket.dev/blog/malicious-pypi-package-targets-discord-developers-with-RAT) detected with high confidence ``` __ __ .--------.---.-| .--.--.--|__| | | _ | | | | | | |__|__|__|___._|__|________|__| AI Python Malware Scanner - target: examples - seconds: 1.87 - files: 14 ├── scanned: 4 (.py) ├── skipped: 10 (.cfg, .md, .toml, .txt) └── suspicious: ├── examples/malicious/discordpydebug-0.0.4/setup.py │ └── <module> │ ├── archive compression │ └── package installation execution └── examples/malicious/discordpydebug-0.0.4/src/discordpydebug/__init__.py ├── <module> │ ├── process management │ ├── deserialization │ ├── system interaction │ └── user io ├── run │ └── fs linking ├── debug │ ├── fs linking │ └── archive compression └── runcommand └── process management => 👹 malicious 0.98 ``` ## PyPI Package Scanning malwi can directly scan PyPI packages without executing malicious logic, typically placed in `setup.py` or `__init__.py` files: ```bash malwi pypi requests ```` ``` __ __ .--------.---.-| .--.--.--|__| | | _ | | | | | | |__|__|__|___._|__|________|__| AI Python Malware Scanner - target: downloads/requests-2.32.4.tar - seconds: 3.10 - files: 84 ├── scanned: 34 └── skipped: 50 => 🟢 good ``` ## Python API malwi provides a comprehensive Python API for integrating malware detection into your applications. 
### Quick Start ```python import malwi report = malwi.MalwiReport.create(input_path="suspicious_file.py") for obj in report.malicious_objects: print(f"File: {obj.file_path}") ``` ### `MalwiReport` ```python MalwiReport.create( input_path, # str or Path - file/directory to scan accepted_extensions=None, # List[str] - file extensions to scan (e.g., ['py', 'js']) silent=False, # bool - suppress progress messages malicious_threshold=0.7, # float - threshold for malicious classification (0.0-1.0) on_finding=None # callable - callback when malicious objects found ) -> MalwiReport # Returns: MalwiReport instance with scan results ``` ```python import malwi report = malwi.MalwiReport.create("suspicious_directory/") # Properties report.malicious # bool: True if malicious objects detected report.confidence # float: Overall confidence score (0.0-1.0) report.duration # float: Scan duration in seconds report.all_objects # List[MalwiObject]: All analyzed code objects report.malicious_objects # List[MalwiObject]: Objects exceeding threshold report.threshold # float: Maliciousness threshold used (0.0-1.0) report.all_files # List[Path]: All files found in input path report.skipped_files # List[Path]: Files skipped (wrong extension) report.processed_files # int: Number of files successfully processed report.activities # List[str]: Suspicious activities detected report.input_path # str: Original input path scanned report.start_time # str: ISO 8601 timestamp when scan started report.all_file_types # List[str]: All file extensions found report.version # str: Malwi version with model hash # Methods report.to_demo_text() # str: Human-readable tree summary report.to_json() # str: JSON formatted report report.to_yaml() # str: YAML formatted report report.to_markdown() # str: Markdown formatted report # Pre-load models to avoid delay on first prediction malwi.MalwiReport.load_models_into_memory() ``` ### `MalwiObject` ```python obj = report.all_objects[0] # Core properties obj.name # str: Function/class/module name obj.file_path # str: Path to source file obj.language # str: Programming language ('python'/'javascript') obj.maliciousness # float|None: ML confidence score (0.0-1.0) obj.warnings # List[str]: Compilation warnings/errors # Source code and AST compilation obj.file_source_code # str: Complete content of source file obj.source_code # str|None: Extracted source for this specific object obj.byte_code # List[Instruction]|None: Compiled AST bytecode obj.location # Tuple[int,int]|None: Start and end line numbers obj.embedding_count # int: Number of DistilBERT tokens (cached) # Analysis methods obj.predict() # dict: Run ML prediction and update maliciousness obj.to_tokens() # List[str]: Extract tokens for analysis obj.to_token_string() # str: Space-separated token string obj.to_string() # str: Bytecode as readable string obj.to_hash() # str: SHA256 hash of bytecode obj.to_dict() # dict: Serializable representation obj.to_yaml() # str: YAML formatted output obj.to_json() # str: JSON formatted output # Class methods MalwiObject.all_tokens(language="python") # List[str]: All possible tokens ``` ## Why malwi? Malicious actors are increasingly [targeting open-source projects](https://arxiv.org/pdf/2404.04991), introducing packages designed to compromise security. Common malicious behaviors include: - **Data exfiltration**: Theft of sensitive information such as credentials, API keys, or user data. - **Backdoors**: Unauthorized remote access to systems, enabling attackers to exploit vulnerabilities. 
- **Destructive actions**: Deliberate sabotage, including file deletion, database corruption, or application disruption. ## How does it work? malwi is based on the design of [_Zero Day Malware Detection with Alpha: Fast DBI with Transformer Models for Real World Application_ (2025)](https://arxiv.org/pdf/2504.14886v1). Imagine there is a function like: ```python def runcommand(value): output = subprocess.run(value, shell=True, capture_output=True) return [output.stdout, output.stderr] ``` ### 1. Files are compiled to create an Abstract Syntax Tree with [Tree-sitter](https://tree-sitter.github.io/tree-sitter/index.html) ``` module [0, 0] - [3, 0] function_definition [0, 0] - [2, 41] name: identifier [0, 4] - [0, 14] parameters: parameters [0, 14] - [0, 21] identifier [0, 15] - [0, 20] ... ``` ### 2. The AST is transpiled to dummy bytecode The bytecode is enhanced with security related instructions. ``` TARGETED_FILE PUSH_NULL LOAD_GLOBAL PROCESS_MANAGEMENT LOAD_ATTR run LOAD_PARAM value LOAD_CONST BOOLEAN LOAD_CONST BOOLEAN KW_NAMES shell capture_output CALL STRING_VERSION STORE_GLOBAL output LOAD_GLOBAL output LOAD_ATTR stdout LOAD_GLOBAL output LOAD_ATTR stderr BUILD_LIST STRING_VERSION RETURN_VALUE ``` ### 3. The bytecode is fed into a pre-trained [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert) A DistilBERT model trained on [malware-samples](https://github.com/schirrmacher/malwi-samples) is used to identify suspicious code patterns. ``` => Maliciousness: 0.98 ``` ## Benchmarks? ``` training_loss: 0.0110 epochs_completed: 3.0000 original_train_samples: 598540.0000 windowed_train_features: 831865.0000 original_validation_samples: 149636.0000 windowed_validation_features: 204781.0000 benign_samples_used: 734930.0000 malicious_samples_used: 13246.0000 benign_to_malicious_ratio: 60.0000 vocab_size: 30522.0000 max_length: 512.0000 window_stride: 128.0000 batch_size: 16.0000 eval_loss: 0.0107 eval_accuracy: 0.9980 eval_f1: 0.9521 eval_precision: 0.9832 eval_recall: 0.9229 eval_runtime: 115.5982 eval_samples_per_second: 1771.4900 eval_steps_per_second: 110.7200 epoch: 3.0000 ``` ## Contributing & Support - Found a bug or have a feature request? [Open an issue](https://github.com/schirrmacher/malwi/issues). - Do you have access to malicious packages in Rust, Go, or other languages? [Contact via GitHub profile](https://github.com/schirrmacher). - Struggling with false-positive findings? [Create a Pull-Request](https://github.com/schirrmacher/malwi-samples/pulls). ## Research ### Prerequisites 1. **Package Manager**: Install [uv](https://docs.astral.sh/uv/) for fast Python dependency management 2. **Training Data**: The research CLI will automatically clone [malwi-samples](https://github.com/schirrmacher/malwi-samples) when needed ### Quick Start ```bash # Install dependencies uv sync # Run tests uv run pytest tests # Train a model from scratch (full pipeline with automatic data download) ./research download preprocess train ``` #### Individual Pipeline Steps ```bash # 1. Download training data (clones malwi-samples + downloads repositories) ./research download # 2. Data preprocessing only (parallel processing, ~4 min on 32 cores) ./research preprocess --language python # 3. Model training only (tokenizer + DistilBERT, ~40 minutes on NVIDIA RTX 4090) ./research train ``` ## Limitations The malicious dataset includes some boilerplate functions, such as setup functions, which can also appear in benign code. These cause false positives during scans. 
The goal is to triage and reduce such false positives to improve malwi's accuracy. ## What's next? The first iteration focuses on **maliciousness of Python source code**. Future iterations will cover malware scanning for more languages (JavaScript, Rust, Go) and more formats (binaries, logs).
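Tying the API above together, here is a minimal end-to-end sketch that pre-loads the models, scans a directory, and persists the findings; every call is from the documented API and the paths are placeholders:

```python
import malwi

# Pre-load models so the first prediction is not delayed
malwi.MalwiReport.load_models_into_memory()

report = malwi.MalwiReport.create(
    input_path="suspicious_directory/",   # placeholder path
    accepted_extensions=["py"],
    malicious_threshold=0.7,
)

if report.malicious:
    for obj in report.malicious_objects:
        print(f"{obj.file_path}:{obj.name} -> {obj.maliciousness:.2f}")

# Persist the full report for later triage
with open("malwi_report.json", "w") as f:
    f.write(report.to_json())
```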
karinegabsschon/BERTopic_Environmental
karinegabsschon
2025-09-01T07:13:11Z
2
0
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
2025-07-07T16:53:11Z
--- tags: - bertopic library_name: bertopic pipeline_tag: text-classification --- # BERTopic_Environmental This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("karinegabsschon/BERTopic_Environmental") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 26 * Number of training documents: 905 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | electric - car - cars - charging - vehicles | 11 | -1_electric_car_cars_charging | | 0 | battery - batteries - lithium - catl - technology | 213 | 0_battery_batteries_lithium_catl | | 1 | byd - charging - dolphin - chinese - new | 61 | 1_byd_charging_dolphin_chinese | | 2 | charging - ev - chargers - ev charging - electric | 58 | 2_charging_ev_chargers_ev charging | | 3 | zero - government - uk - mandate - electric | 57 | 3_zero_government_uk_mandate | | 4 | electric - charging - points - france - car | 49 | 4_electric_charging_points_france | | 5 | battery - lithium - recycling - batteries - supply | 48 | 5_battery_lithium_recycling_batteries | | 6 | cars - combustion - study - electric - car | 36 | 6_cars_combustion_study_electric | | 7 | percent - cars - market - sales - vehicles | 33 | 7_percent_cars_market_sales | | 8 | fires - safety - battery - electric - cars | 29 | 8_fires_safety_battery_electric | | 9 | charging - electric - sweden - vehicles - circle | 29 | 9_charging_electric_sweden_vehicles | | 10 | tax - drivers - petrol - ev - rates | 25 | 10_tax_drivers_petrol_ev | | 11 | kia - car - model - electric - range | 25 | 11_kia_car_model_electric | | 12 | cent - car - petrol - evs - drivers | 23 | 12_cent_car_petrol_evs | | 13 | charging - stations - charging stations - charging points - points | 23 | 13_charging_stations_charging stations_charging points | | 14 | india - ev - green - mobility - electric | 23 | 14_india_ev_green_mobility | | 15 | indonesia - battery - lg - ev - ev battery | 20 | 15_indonesia_battery_lg_ev | | 16 | department - flames - police - car - tesla | 20 | 16_department_flames_police_car | | 17 | transport - ireland - council - ev - climate | 19 | 17_transport_ireland_council_ev | | 18 | toyota - electric - new - europe - hyundai | 19 | 18_toyota_electric_new_europe | | 19 | sales - new - electric - cent - car | 17 | 19_sales_new_electric_cent | | 20 | european - commission - eu - von - der | 15 | 20_european_commission_eu_von | | 21 | power - blackout - spain - homes - electricity | 14 | 21_power_blackout_spain_homes | | 22 | nissan - leaf - micra - new - generation | 13 | 22_nissan_leaf_micra_new | | 23 | ship - coast - vessel - coast guard - guard | 13 | 23_ship_coast_vessel_coast guard | | 24 | id - volkswagen - vw - every1 - id every1 | 12 | 24_id_volkswagen_vw_every1 | </details> ## Training hyperparameters * calculate_probabilities: False * language: None * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: None * seed_topic_list: None * top_n_words: 10 * verbose: True * zeroshot_min_similarity: 0.7 * zeroshot_topic_list: None ## Framework versions * Numpy: 2.0.2 * HDBSCAN: 0.8.40 * UMAP: 0.5.8 * 
Pandas: 2.2.2 * Scikit-Learn: 1.6.1 * Sentence-transformers: 4.1.0 * Transformers: 4.53.0 * Numba: 0.60.0 * Plotly: 5.24.1 * Python: 3.11.13
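Beyond `get_topic_info()`, here is a minimal sketch for assigning topics to unseen documents with the standard BERTopic API, assuming the saved model bundles its embedding model; the example sentence is illustrative:

```python
from bertopic import BERTopic

topic_model = BERTopic.load("karinegabsschon/BERTopic_Environmental")

# Map each new document to its closest existing topic;
# the second return value is None here since calculate_probabilities=False above
docs = ["New fast-charging stations are planned along the motorway network."]
topics, _ = topic_model.transform(docs)

print(topics)                            # e.g. [2], the charging/EV topic (illustrative)
print(topic_model.get_topic(topics[0]))  # top keywords with their c-TF-IDF scores
```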
VoilaRaj/81_g_V5HwwQ
VoilaRaj
2025-09-01T07:12:39Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-01T07:12:05Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
akirafudo/blockassist-bc-keen_fast_giraffe_1756710612
akirafudo
2025-09-01T07:11:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T07:10:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
david3621/blockassist-bc-gentle_meek_cat_1756709671
david3621
2025-09-01T07:10:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle meek cat", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T07:09:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle meek cat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
karinegabsschon/BERTopic_Political
karinegabsschon
2025-09-01T07:09:04Z
2
0
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
2025-07-07T16:42:01Z
--- tags: - bertopic library_name: bertopic pipeline_tag: text-classification --- # BERTopic_Political This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("karinegabsschon/BERTopic_Political") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 20 * Number of training documents: 619 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | electric - tariffs - vehicles - ev - car | 11 | -1_electric_tariffs_vehicles_ev | | 0 | cars - spd - tax - electric - purchase | 97 | 0_cars_spd_tax_electric | | 1 | charging - chargers - public - ev - points | 87 | 1_charging_chargers_public_ev | | 2 | tax - car - new - electric - petrol | 72 | 2_tax_car_new_electric | | 3 | tesla - musk - elon - elon musk - trump | 53 | 3_tesla_musk_elon_elon musk | | 4 | moves - aid - electric - euros - plan | 49 | 4_moves_aid_electric_euros | | 5 | byd - chinese - china - price - price war | 36 | 5_byd_chinese_china_price | | 6 | targets - government - mandate - starmer - zero | 25 | 6_targets_government_mandate_starmer | | 7 | euros - bonus - ecological - ecological bonus - electric | 21 | 7_euros_bonus_ecological_ecological bonus | | 8 | california - trump - states - administration - electric | 21 | 8_california_trump_states_administration | | 9 | tariffs - united states - united - states - plant | 20 | 9_tariffs_united states_united_states | | 10 | ukraine - region - electric - ukrainian - vehicles | 18 | 10_ukraine_region_electric_ukrainian | | 11 | tesla - city - toronto - canadian - chow | 16 | 11_tesla_city_toronto_canadian | | 12 | eu - china - chinese - tariffs - minimum | 15 | 12_eu_china_chinese_tariffs | | 13 | chinese - defence - security - spying - military | 15 | 13_chinese_defence_security_spying | | 14 | european - eu - commission - industry - electric | 14 | 14_european_eu_commission_industry | | 15 | huf - businesses - subsidies - hungary - battery | 13 | 15_huf_businesses_subsidies_hungary | | 16 | cent - government - diesel - fleet - electric | 12 | 16_cent_government_diesel_fleet | | 17 | credit - tax - electric - vehicles - electric vehicles | 12 | 17_credit_tax_electric_vehicles | | 18 | british - trade - cars - government - tariffs | 12 | 18_british_trade_cars_government | </details> ## Training hyperparameters * calculate_probabilities: False * language: None * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: None * seed_topic_list: None * top_n_words: 10 * verbose: True * zeroshot_min_similarity: 0.7 * zeroshot_topic_list: None ## Framework versions * Numpy: 2.0.2 * HDBSCAN: 0.8.40 * UMAP: 0.5.8 * Pandas: 2.2.2 * Scikit-Learn: 1.6.1 * Sentence-transformers: 4.1.0 * Transformers: 4.53.0 * Numba: 0.60.0 * Plotly: 5.24.1 * Python: 3.11.13
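To locate topics by theme rather than by ID, `find_topics` performs a semantic search over the trained topics. A sketch, assuming the saved model bundles its embedding model; the query is illustrative:

```python
from bertopic import BERTopic

topic_model = BERTopic.load("karinegabsschon/BERTopic_Political")

# Find the topics most similar to a free-text query
similar_topics, similarity = topic_model.find_topics("tariffs", top_n=3)

for topic_id, score in zip(similar_topics, similarity):
    print(topic_id, round(score, 2), topic_model.get_topic(topic_id)[:5])
```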
vendi11/blockassist-bc-placid_placid_llama_1756710490
vendi11
2025-09-01T07:08:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T07:08:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hogensynoo/blockassist-bc-wary_darting_platypus_1756708124
hogensynoo
2025-09-01T06:28:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wary darting platypus", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T06:28:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wary darting platypus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/FoxCide-12B-Forgottenslop-Mell-i1-GGUF
mradermacher
2025-09-01T06:27:59Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-09-01T06:05:49Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/pot99rta/FoxCide-12B-Forgottenslop-Mell
outlookAi/Xg4E2wMoPV
outlookAi
2025-09-01T06:25:39Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-01T06:08:44Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Kaong --- # Xg4E2Wmopv <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Kaong ` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Kaong ", "lora_weights": "https://huggingface.co/outlookAi/Xg4E2wMoPV/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('outlookAi/Xg4E2wMoPV', weight_name='lora.safetensors') image = pipeline('Kaong ').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1200 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/outlookAi/Xg4E2wMoPV/discussions) to add images that show off what you’ve made with this LoRA.
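If the style comes through too strongly or too weakly, one option in diffusers is to fuse the LoRA into the base weights at a chosen scale before generating. A sketch; the 0.8 scale is an arbitrary example:

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('outlookAi/Xg4E2wMoPV', weight_name='lora.safetensors')

# Fuse the LoRA at reduced strength (0.8 is an arbitrary example)
pipeline.fuse_lora(lora_scale=0.8)

image = pipeline('Kaong ').images[0]
image.save('kaong.png')
```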
the-acorn-ai/spiral-octothinker-8b-multi-three-games-step00160
the-acorn-ai
2025-09-01T06:24:45Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "spiral", "self-play", "reinforcement-learning", "octothinker", "multi-agent", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T06:23:59Z
--- base_model: OctoThinker-8B license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - octothinker - multi-agent --- # SPIRAL OctoThinker-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: OctoAI/OctoThinker-8B - **Training Framework**: SPIRAL - **Checkpoint**: step_00160 - **Model Size**: 8B parameters - **Training Date**: 2025-08-31 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "OctoThinker-8B", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-octothinker-8b-multi-three-games-step00160") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-octothinker-8b-multi-three-games-step00160", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
the-acorn-ai/spiral-octothinker-8b-multi-three-games-step00128
the-acorn-ai
2025-09-01T06:23:58Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "spiral", "self-play", "reinforcement-learning", "octothinker", "multi-agent", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T06:23:11Z
--- base_model: OctoThinker-8B license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - octothinker - multi-agent --- # SPIRAL OctoThinker-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: OctoAI/OctoThinker-8B - **Training Framework**: SPIRAL - **Checkpoint**: step_00128 - **Model Size**: 8B parameters - **Training Date**: 2025-08-31 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "OctoThinker-8B", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-octothinker-8b-multi-three-games-step00128") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-octothinker-8b-multi-three-games-step00128", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
the-acorn-ai/spiral-qwen-8b-khun-tictactoe-8k-step00224
the-acorn-ai
2025-09-01T06:22:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "spiral", "self-play", "reinforcement-learning", "multi-agent", "conversational", "en", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T06:22:13Z
--- base_model: Qwen/Qwen3-8B-Base license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - qwen3 - multi-agent --- # SPIRAL Qwen3-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: Qwen/Qwen3-8B-Base - **Training Framework**: SPIRAL - **Checkpoint**: step_00224 - **Model Size**: 8B parameters - **Training Date**: 2025-08-31 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "Qwen/Qwen3-8B-Base", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-qwen-8b-khun-tictactoe-8k-step00224") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-qwen-8b-khun-tictactoe-8k-step00224", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
0xlich/task-13-Qwen-Qwen2.5-1.5B-Instruct
0xlich
2025-09-01T06:22:44Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "region:us" ]
null
2025-09-01T04:34:48Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
Bingham/qwen_2_5_grpo_11_train_unsloth_model
Bingham
2025-09-01T06:20:55Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-26T19:28:04Z
--- base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Bingham - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
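A loading sketch with Unsloth's `FastLanguageModel`, mirroring the 4-bit base above; the sequence length is illustrative, and whether this repo loads directly this way (adapter vs. merged weights) is an assumption:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Bingham/qwen_2_5_grpo_11_train_unsloth_model",  # assumed directly loadable
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```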
binhpdt/reproduced-gliner-medium
binhpdt
2025-09-01T06:20:22Z
0
0
null
[ "pytorch", "base_model:urchade/gliner_base", "base_model:finetune:urchade/gliner_base", "license:apache-2.0", "region:us" ]
null
2025-09-01T06:05:20Z
--- license: apache-2.0 base_model: - urchade/gliner_base --- A reproduction of the original GLiNER model training, for research purposes. The model is trained with the authors' hyperparameters, using a batch size of 8 for 30k steps.
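A usage sketch with the upstream `gliner` package; the text and labels are illustrative, and loading this checkpoint directly via `GLiNER.from_pretrained` is an assumption based on the base model:

```python
from gliner import GLiNER  # pip install gliner

model = GLiNER.from_pretrained("binhpdt/reproduced-gliner-medium")  # assumed loadable like urchade/gliner_base

text = "Ada Lovelace worked with Charles Babbage in London."
labels = ["person", "location"]

for entity in model.predict_entities(text, labels):
    print(entity["text"], "->", entity["label"])
```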
mradermacher/Epstein-i1-GGUF
mradermacher
2025-09-01T06:20:15Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Pclanglais/Epstein", "base_model:quantized:Pclanglais/Epstein", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-09-01T05:12:36Z
--- base_model: Pclanglais/Epstein language: - en library_name: transformers license: cc-by-sa-4.0 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/Pclanglais/Epstein <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Epstein-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Epstein-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | | | 
[GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q4_1.gguf) | i1-Q4_1 | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-i1-GGUF/resolve/main/Epstein.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
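As a concrete Python starting point, here is a minimal sketch that downloads one of the quants above and runs it with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the quant choice and generation settings are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Q4_K_M is the "fast, recommended" pick from the table above
path = hf_hub_download(
    repo_id="mradermacher/Epstein-i1-GGUF",
    filename="Epstein.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)  # context size is illustrative
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```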
mradermacher/Epstein-GGUF
mradermacher
2025-09-01T06:12:15Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Pclanglais/Epstein", "base_model:quantized:Pclanglais/Epstein", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
null
2025-08-31T15:46:40Z
--- base_model: Pclanglais/Epstein language: - en library_name: transformers license: cc-by-sa-4.0 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Pclanglais/Epstein <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Epstein-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Epstein-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Epstein-GGUF/resolve/main/Epstein.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
LarryAIDraw/checkpoint-e18_s882
LarryAIDraw
2025-09-01T06:11:12Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-09-01T06:07:55Z
--- license: creativeml-openrail-m --- https://civitai.com/models/1908769/augusta-wuthering-waves
kejuss/blockassist-bc-timid_voracious_gecko_1756706872
kejuss
2025-09-01T06:08:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "timid voracious gecko", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T06:08:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - timid voracious gecko --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756705660
Sayemahsjn
2025-09-01T06:06:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T06:06:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbkts/blockassist-bc-keen_fast_giraffe_1756706590
omerbkts
2025-09-01T06:03:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T06:03:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
klmdr22/blockassist-bc-wild_loud_newt_1756706555
klmdr22
2025-09-01T06:03:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild loud newt", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T06:03:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild loud newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
samunder12/llama-3.1-8b-roleplay-lora
samunder12
2025-09-01T06:01:47Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-01T06:00:18Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** samunder12 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756703841
Sayemahsjn
2025-09-01T05:36:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T05:36:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
acidjp/blockassist-bc-pesty_extinct_prawn_1756701893
acidjp
2025-09-01T05:29:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T05:29:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
klmdr22/blockassist-bc-wild_loud_newt_1756703641
klmdr22
2025-09-01T05:14:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild loud newt", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T05:14:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild loud newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
CodeAtCMU/Llama-3.2-1B-GenerativePerturbations_full_sft_code_data_120K_step_by_step
CodeAtCMU
2025-09-01T05:00:55Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T05:00:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aXsalll/blockassist-bc-chattering_galloping_ape_1756702593
aXsalll
2025-09-01T04:57:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "chattering galloping ape", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T04:56:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - chattering galloping ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1756700352
koloni
2025-09-01T04:44:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T04:44:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
v2ray/nai-lora-heavy-line
v2ray
2025-09-01T04:44:57Z
0
0
peft
[ "peft", "art", "text-to-image", "en", "base_model:Laxhar/noobai-xl-EarlyAccess", "base_model:adapter:Laxhar/noobai-xl-EarlyAccess", "license:mit", "region:us" ]
text-to-image
2025-08-31T06:01:48Z
--- license: mit language: - en base_model: - Laxhar/sdxl_noob pipeline_tag: text-to-image tags: - art library_name: peft --- # NoobAI XL LoRA Heavy Line This LoRA is trained for 2 models: [heavy-line.safetensors](https://huggingface.co/v2ray/nai-lora-heavy-line/resolve/main/heavy-line.safetensors) for the [v1.1 version of NoobAI XL](https://civitai.com/models/833294?modelVersionId=1116447), and [heavy-line-mmh.safetensors](https://huggingface.co/v2ray/nai-lora-heavy-line/resolve/main/heavy-line-mmh.safetensors) for the [Vpred 1.1 version of MiaoMiao Harem](https://civitai.com/models/934764?modelVersionId=1690053). The dataset used to train this LoRA was scraped using [LagPixelLOL/aisp](https://github.com/LagPixelLOL/aisp) and contains a total of 578 images from 3 artists. Big thanks to the artists for the very cute styles :3. To use this LoRA, you can go without a trigger word, which will use all 3 artists' styles together, or you can specify a particular artist's style with a trigger word; note this model is mostly a foot model. \ pixiv [@くろやくそく](https://www.pixiv.net/users/6478220): `hei yksk` \ pixiv [@leonzo030](https://www.pixiv.net/users/13765232): `leonzo` \ pixiv [@フリザ](https://www.pixiv.net/users/67904089): `efreezerarts` This LoRA was trained using [kohya-ss/sd-scripts](https://github.com/kohya-ss/sd-scripts), with rank 32, alpha 16, and learning rate 1e-4, for 192 epochs (5184 steps in total) on a B200; training took approximately 6 hours. If you have any questions or suggestions, or just want to talk to me, you can add me on Discord with ID [@v2ray](https://discord.gg/r4Wj97nZ). ## Examples ![](https://huggingface.co/v2ray/nai-lora-heavy-line/resolve/main/examples/0.avif) ![](https://huggingface.co/v2ray/nai-lora-heavy-line/resolve/main/examples/1.avif) ![](https://huggingface.co/v2ray/nai-lora-heavy-line/resolve/main/examples/2.avif)
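A loading sketch with diffusers; the base checkpoint id and settings are assumptions (NoobAI XL is SDXL-family, and the v-pred variant additionally needs a matching scheduler):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumed SDXL-compatible base; swap in the exact NoobAI XL checkpoint you use
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Laxhar/noobai-XL-1.1", torch_dtype=torch.float16  # repo id is an assumption
).to("cuda")
pipe.load_lora_weights("v2ray/nai-lora-heavy-line", weight_name="heavy-line.safetensors")

# Use a trigger word for one artist's style, e.g. `hei yksk`; omit it to blend all three
image = pipe("1girl, hei yksk, masterpiece", num_inference_steps=28).images[0]
image.save("heavy_line.png")
```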
yujiepan/longcat-flash-tiny-random
yujiepan
2025-09-01T04:36:42Z
0
0
transformers
[ "transformers", "safetensors", "longcat_flash", "text-generation", "conversational", "custom_code", "base_model:meituan-longcat/LongCat-Flash-Chat", "base_model:finetune:meituan-longcat/LongCat-Flash-Chat", "autotrain_compatible", "region:us" ]
text-generation
2025-09-01T04:36:39Z
--- library_name: transformers pipeline_tag: text-generation inference: true widget: - text: Hello! example_title: Hello world group: Python base_model: - meituan-longcat/LongCat-Flash-Chat --- This tiny model is for debugging. It is randomly initialized with the config adapted from [meituan-longcat/LongCat-Flash-Chat](https://huggingface.co/meituan-longcat/LongCat-Flash-Chat). ### Example usage: - vLLM ```bash vllm serve yujiepan/longcat-flash-tiny-random \ --trust-remote-code \ --enable-expert-parallel \ --tensor-parallel-size 1 \ --speculative_config '{"model": "yujiepan/longcat-flash-tiny-random", "num_speculative_tokens": 1, "method":"longcat_flash_mtp"}' ``` - SGLang ```bash python3 -m sglang.launch_server \ --model yujiepan/longcat-flash-tiny-random \ --trust-remote-code \ --attention-backend flashinfer \ --enable-ep-moe \ --tp 1 \ --speculative-draft-model-path yujiepan/longcat-flash-tiny-random \ --speculative-algorithm NEXTN \ --speculative-num-draft-tokens 2 \ --speculative-num-steps 1 \ --speculative-eagle-topk 1 ``` - Transformers ```python import torch import transformers model_id = "yujiepan/longcat-flash-tiny-random" pipe = transformers.pipelines.pipeline( 'text-generation', model=model_id, trust_remote_code=True, device_map='cuda', torch_dtype=torch.bfloat16, ) past_key_values = transformers.DynamicCache(config=None) # set config to None r = pipe('Hello, world!', past_key_values=past_key_values, max_new_tokens=32) print(r) ``` ### Codes to create this repo: ```python import json from copy import deepcopy from pathlib import Path import torch import torch.nn as nn from huggingface_hub import file_exists, hf_hub_download from transformers import ( AutoConfig, AutoModelForCausalLM, AutoProcessor, AutoTokenizer, GenerationConfig, set_seed, ) from transformers.models.glm4_moe.modeling_glm4_moe import Glm4MoeRMSNorm source_model_id = "meituan-longcat/LongCat-Flash-Chat" save_folder = "/tmp/yujiepan/longcat-flash-tiny-random" Path(save_folder).mkdir(parents=True, exist_ok=True) tokenizer = AutoTokenizer.from_pretrained(source_model_id, trust_remote_code=True) tokenizer.save_pretrained(save_folder) with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f: config_json = json.load(f) for k, v in config_json['auto_map'].items(): config_json['auto_map'][k] = f'{source_model_id}--{v}' config_json.update({ 'num_layers': 2, 'hidden_size': 8, 'ffn_hidden_size': 64, 'expert_ffn_hidden_size': 64, 'num_attention_heads': 4, 'kv_lora_rank': 384, 'n_routed_experts': 32, 'q_lora_rank': 32, 'qk_nope_head_dim': 64, 'qk_rope_head_dim': 192, # vllm mla kernel supports 576 only, FA supports head dim <= 256 'v_head_dim': 64, 'moe_topk': 12, 'zero_expert_num': 16, }) # del config_json['quantization_config'] with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f: json.dump(config_json, f, indent=2) config = AutoConfig.from_pretrained( save_folder, trust_remote_code=True, ) print(config) torch.set_default_dtype(torch.bfloat16) model = AutoModelForCausalLM.from_config(config, trust_remote_code=True) if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'): model.generation_config = GenerationConfig.from_pretrained( source_model_id, trust_remote_code=True, ) model = model.cpu() # MTP model.model.mtp = nn.ModuleDict({ "layers": nn.ModuleList([nn.ModuleDict(dict( eh_proj=nn.Linear(config.hidden_size * 2, config.hidden_size, bias=False), enorm=nn.ModuleDict({"m": nn.RMSNorm(config.hidden_size)}), 
hnorm=nn.ModuleDict({"m": nn.RMSNorm(config.hidden_size)}), input_layernorm=nn.RMSNorm(config.hidden_size), post_attention_layernorm=nn.RMSNorm(config.hidden_size), self_attn=deepcopy(model.model.layers[0].self_attn[0]), transformer_layer=nn.ModuleDict({"mlp": deepcopy(model.model.layers[0].mlps[0])}), ))]), "norm": nn.RMSNorm(config.hidden_size), }) for i in range(config.num_layers): model.model.layers[i].mlp.router = model.model.layers[i].mlp.router.float() # model.model.layers[i].mlp.router.e_score_correction_bias = torch.zeros((config.n_routed_experts + config.zero_expert_num)).float() set_seed(42) with torch.no_grad(): for name, p in sorted(model.named_parameters()): torch.nn.init.normal_(p, 0, 0.1) print(name, p.shape, p.dtype) model.model.mtp.embed_tokens = deepcopy(model.model.embed_tokens) model.save_pretrained(save_folder) torch.set_default_dtype(torch.float32) for n, m in model.named_modules(): if 'LongcatFlashMLA' in str(type(m)): print(n, m.layer_idx) with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f: config_json = json.load(f) config_json['auto_map'] = {k: v.split('--')[-1] for k, v in config_json['auto_map'].items()} with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f: json.dump(config_json, f, indent=2) ``` ### Printing the model: ```text LongcatFlashForCausalLM( (model): LongcatFlashModel( (embed_tokens): Embedding(131072, 8) (layers): ModuleList( (0-1): 2 x LongcatFlashDecoderLayer( (mlp): LongcatFlashMoE( (experts): ModuleList( (0-31): 32 x LongcatFlashMLP( (gate_proj): Linear(in_features=8, out_features=64, bias=False) (up_proj): Linear(in_features=8, out_features=64, bias=False) (down_proj): Linear(in_features=64, out_features=8, bias=False) (act_fn): SiLU() ) ) (router): LongcatFlashTopkRouter( (classifier): Linear(in_features=8, out_features=48, bias=False) ) ) (self_attn): ModuleList( (0-1): 2 x LongcatFlashMLA( (q_a_proj): Linear(in_features=8, out_features=32, bias=False) (q_a_layernorm): LongcatFlashRMSNorm((32,), eps=1e-06) (q_b_proj): Linear(in_features=32, out_features=1024, bias=False) (kv_a_proj_with_mqa): Linear(in_features=8, out_features=576, bias=False) (kv_a_layernorm): LongcatFlashRMSNorm((384,), eps=1e-06) (kv_b_proj): Linear(in_features=384, out_features=512, bias=False) (o_proj): Linear(in_features=256, out_features=8, bias=False) ) ) (mlps): ModuleList( (0-1): 2 x LongcatFlashMLP( (gate_proj): Linear(in_features=8, out_features=64, bias=False) (up_proj): Linear(in_features=8, out_features=64, bias=False) (down_proj): Linear(in_features=64, out_features=8, bias=False) (act_fn): SiLU() ) ) (input_layernorm): ModuleList( (0-1): 2 x LongcatFlashRMSNorm((8,), eps=1e-05) ) (post_attention_layernorm): ModuleList( (0-1): 2 x LongcatFlashRMSNorm((8,), eps=1e-05) ) ) ) (norm): LongcatFlashRMSNorm((8,), eps=1e-05) (rotary_emb): LongcatFlashRotaryEmbedding() (mtp): ModuleDict( (layers): ModuleList( (0): ModuleDict( (eh_proj): Linear(in_features=16, out_features=8, bias=False) (enorm): ModuleDict( (m): RMSNorm((8,), eps=None, elementwise_affine=True) ) (hnorm): ModuleDict( (m): RMSNorm((8,), eps=None, elementwise_affine=True) ) (input_layernorm): RMSNorm((8,), eps=None, elementwise_affine=True) (post_attention_layernorm): RMSNorm((8,), eps=None, elementwise_affine=True) (self_attn): LongcatFlashMLA( (q_a_proj): Linear(in_features=8, out_features=32, bias=False) (q_a_layernorm): LongcatFlashRMSNorm((32,), eps=1e-06) (q_b_proj): Linear(in_features=32, out_features=1024, bias=False) (kv_a_proj_with_mqa): Linear(in_features=8, 
out_features=576, bias=False) (kv_a_layernorm): LongcatFlashRMSNorm((384,), eps=1e-06) (kv_b_proj): Linear(in_features=384, out_features=512, bias=False) (o_proj): Linear(in_features=256, out_features=8, bias=False) ) (transformer_layer): ModuleDict( (mlp): LongcatFlashMLP( (gate_proj): Linear(in_features=8, out_features=64, bias=False) (up_proj): Linear(in_features=8, out_features=64, bias=False) (down_proj): Linear(in_features=64, out_features=8, bias=False) (act_fn): SiLU() ) ) ) ) (norm): RMSNorm((8,), eps=None, elementwise_affine=True) (embed_tokens): Embedding(131072, 8) ) ) (lm_head): Linear(in_features=8, out_features=131072, bias=False) ) ```
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756698816
Loder-S
2025-09-01T04:21:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sprightly knobby tiger", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T04:21:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sprightly knobby tiger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
wjkim9653/llama-3.2-3b-instruct-ldi-clinic-base-rlaif-rlhf
wjkim9653
2025-09-01T04:19:06Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T04:11:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liukevin666/blockassist-bc-yawning_striped_cassowary_1756700135
liukevin666
2025-09-01T04:16:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T04:16:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-beaked_lumbering_cockroach_1756700128
AnerYubo
2025-09-01T04:15:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked lumbering cockroach", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T04:15:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked lumbering cockroach --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756699837
akirafudo
2025-09-01T04:10:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T04:10:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
samairtimer/gemma-3-270m-it-blr-slang
samairtimer
2025-09-01T04:08:50Z
0
1
transformers
[ "transformers", "tensorboard", "safetensors", "gguf", "gemma3_text", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:google/gemma-3-270m-it", "base_model:quantized:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T07:28:18Z
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: gemma-3-270m-it-blr-slang
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for gemma-3-270m-it-blr-slang

This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="samairtimer/gemma-3-270m-it-blr-slang", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.22.1
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
pjngth998/lora-datasetv02-Llama-3.1-8B-customer-service-chatbot
pjngth998
2025-09-01T03:59:18Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "region:us" ]
text-generation
2025-09-01T03:50:09Z
--- base_model: meta-llama/Meta-Llama-3.1-8B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.0
AppliedLucent/ALIE-1.2-8B
AppliedLucent
2025-09-01T03:57:57Z
44
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:AppliedLucent/ALIE-1.2-8B", "base_model:finetune:AppliedLucent/ALIE-1.2-8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-30T19:32:48Z
--- base_model: AppliedLucent/ALIE-1.2-8B tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** AppliedLucent - **License:** apache-2.0 - **Finetuned from model :** AppliedLucent/ALIE-1.2-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1756696480
rvipitkirubbe
2025-09-01T03:41:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mottled foraging ape", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T03:41:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mottled foraging ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ymatari/act_so101_cleanup_table_4
ymatari
2025-09-01T03:34:57Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:ymatari/cleanup-table-2", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-09-01T03:34:27Z
---
datasets: ymatari/cleanup-table-2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---

# Model Card for act

<!-- Provide a quick summary of what the model is/does. -->

[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
python -m lerobot.scripts.train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
python -m lerobot.record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
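The policy can also be loaded directly in Python for custom evaluation loops. The sketch below follows the LeRobot policy API at the time of writing; the module path and class name are assumptions that may shift between releases:

```python
# Load the pushed ACT policy (sketch; module path may vary by LeRobot version).
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("ymatari/act_so101_cleanup_table_4")
policy.eval()
```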
sekirr/blockassist-bc-masked_tenacious_whale_1756697640
sekirr
2025-09-01T03:34:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked tenacious whale", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T03:34:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked tenacious whale --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756697234
akirafudo
2025-09-01T03:27:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T03:27:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
2hpsatt/blockassist-bc-huge_deft_eagle_1756696480
2hpsatt
2025-09-01T03:15:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T03:15:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - huge deft eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kznmp3/blockassist-bc-lively_raging_hippo_1756695931
kznmp3
2025-09-01T03:06:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lively raging hippo", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T03:06:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lively raging hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756695925
akirafudo
2025-09-01T03:05:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T03:05:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
danuphat/typhoon-ocr-7b-5-down-ep-3
danuphat
2025-09-01T03:03:33Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:scb10x/typhoon-ocr-7b", "base_model:finetune:scb10x/typhoon-ocr-7b", "endpoints_compatible", "region:us" ]
null
2025-09-01T02:01:27Z
---
base_model: scb10x/typhoon-ocr-7b
library_name: transformers
model_name: typhoon-ocr-7b-5-down-ep-3
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for typhoon-ocr-7b-5-down-ep-3

This model is a fine-tuned version of [scb10x/typhoon-ocr-7b](https://huggingface.co/scb10x/typhoon-ocr-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="danuphat/typhoon-ocr-7b-5-down-ep-3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/danuphat-l-kasetsart-university/typhoon-ocr-7b-add-data-1/runs/0v5mykn1)

This model was trained with SFT.

### Framework versions

- TRL: 0.22.0.dev0
- Transformers: 4.56.0.dev0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
giovannidemuri/llama8b-er-v519-seed2-hx
giovannidemuri
2025-09-01T02:54:50Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T01:11:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
akirafudo/blockassist-bc-keen_fast_giraffe_1756694775
akirafudo
2025-09-01T02:47:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:46:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GroomerG/blockassist-bc-vicious_pawing_badger_1756693228
GroomerG
2025-09-01T02:41:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:41:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
NahedDom/blockassist-bc-flapping_stocky_leopard_1756692306
NahedDom
2025-09-01T02:38:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping stocky leopard", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:38:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping stocky leopard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1756692535
maxibillion1975
2025-09-01T02:36:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "iridescent squeaky sandpiper", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:35:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - iridescent squeaky sandpiper --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kalimoy/blockassist-bc-playful_huge_nightingale_1756693352
kalimoy
2025-09-01T02:23:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful huge nightingale", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:22:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful huge nightingale --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kalimoy/blockassist-bc-soft_curious_camel_1756692448
kalimoy
2025-09-01T02:08:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "soft curious camel", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:07:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - soft curious camel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sekirr/blockassist-bc-masked_tenacious_whale_1756692165
sekirr
2025-09-01T02:03:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked tenacious whale", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:03:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked tenacious whale --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jiarr/Qwen3-0.6B-Gensyn-Swarm-plump_burrowing_capybara
jiarr
2025-09-01T02:02:04Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am plump_burrowing_capybara", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T01:58:30Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am plump_burrowing_capybara --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
akirafudo/blockassist-bc-keen_fast_giraffe_1756691753
akirafudo
2025-09-01T01:56:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T01:56:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1756690472
bah63843
2025-09-01T01:35:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T01:35:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vendi11/blockassist-bc-placid_placid_llama_1756689725
vendi11
2025-09-01T01:22:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T01:22:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lemonhat/Qwen2.5-7B-Instruct-NEW3_t1_50k_v2_tag5_filtered_hermes
lemonhat
2025-09-01T01:21:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T01:20:29Z
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: NEW3_t1_50k_v2_tag5_filtered_hermes
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# NEW3_t1_50k_v2_tag5_filtered_hermes

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the NEW3_t1_50k_v2_tag5_filtered_hermes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1799

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2518        | 0.0628 | 100  | 0.2473          |
| 0.2409        | 0.1255 | 200  | 0.2294          |
| 0.2612        | 0.1883 | 300  | 0.2220          |
| 0.1955        | 0.2511 | 400  | 0.2177          |
| 0.2403        | 0.3139 | 500  | 0.2129          |
| 0.2477        | 0.3766 | 600  | 0.2118          |
| 0.1885        | 0.4394 | 700  | 0.2045          |
| 0.1904        | 0.5022 | 800  | 0.2051          |
| 0.2349        | 0.5650 | 900  | 0.1997          |
| 0.2077        | 0.6277 | 1000 | 0.1944          |
| 0.1978        | 0.6905 | 1100 | 0.1921          |
| 0.21          | 0.7533 | 1200 | 0.1960          |
| 0.2057        | 0.8161 | 1300 | 0.1938          |
| 0.1966        | 0.8788 | 1400 | 0.1910          |
| 0.2953        | 0.9416 | 1500 | 0.1890          |
| 0.1847        | 1.0044 | 1600 | 0.1881          |
| 0.2031        | 1.0672 | 1700 | 0.1892          |
| 0.1982        | 1.1299 | 1800 | 0.1861          |
| 0.1926        | 1.1927 | 1900 | 0.1846          |
| 0.1627        | 1.2555 | 2000 | 0.1835          |
| 0.1849        | 1.3183 | 2100 | 0.1834          |
| 0.2375        | 1.3810 | 2200 | 0.1826          |
| 0.1617        | 1.4438 | 2300 | 0.1827          |
| 0.1851        | 1.5066 | 2400 | 0.1816          |
| 0.2603        | 1.5694 | 2500 | 0.1829          |
| 0.1864        | 1.6321 | 2600 | 0.1824          |
| 0.1699        | 1.6949 | 2700 | 0.1808          |
| 0.1743        | 1.7577 | 2800 | 0.1801          |
| 0.1735        | 1.8205 | 2900 | 0.1801          |
| 0.2142        | 1.8832 | 3000 | 0.1798          |
| 0.1628        | 1.9460 | 3100 | 0.1797          |

### Framework versions

- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
Andra76/blockassist-bc-deadly_enormous_butterfly_1756688920
Andra76
2025-09-01T01:19:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly enormous butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T01:18:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly enormous butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/1661004
seraphimzzzz
2025-09-01T01:19:18Z
0
0
null
[ "region:us" ]
null
2025-09-01T01:19:15Z
[View on Civ Archive](https://civarchive.com/models/1555551?modelVersionId=1760268)
bah63843/blockassist-bc-plump_fast_antelope_1756689151
bah63843
2025-09-01T01:13:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T01:13:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sapie-model/sapie-guarian-fp8
sapie-model
2025-09-01T01:09:39Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "vllm", "vision", "fp8", "conversational", "en", "base_model:google/gemma-3-27b-it", "base_model:quantized:google/gemma-3-27b-it", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us" ]
image-text-to-text
2025-09-01T01:05:42Z
---
tags:
- vllm
- vision
- fp8
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: google/gemma-3-27b-it
library_name: transformers
---

# gemma-3-27b-it-FP8-Dynamic

## Model Overview

- **Model Architecture:** gemma-3-27b-it
  - **Input:** Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it).

### Model Optimizations

This model was obtained by quantizing the weights of [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it) to the FP8 data type, ready for inference with vLLM >= 0.5.2.

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset
from transformers import AutoProcessor

# Define model name once
model_name = "RedHatAI/gemma-3-27b-it-FP8-dynamic"

# Load image and processor
image = ImageAsset("cherry_blossom").pil_image.convert("RGB")
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)

# Build multimodal prompt
chat = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What is the content of this image?"}]},
    {"role": "assistant", "content": []},
]
prompt = processor.apply_chat_template(chat, add_generation_prompt=True)

# Initialize model
llm = LLM(model=model_name, trust_remote_code=True)

# Run inference
inputs = {"prompt": prompt, "multi_modal_data": {"image": [image]}}
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))

# Display result
print("RESPONSE:", outputs[0].outputs[0].text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below as part of a multimodal announcement blog.

<details>
<summary>Model Creation Code</summary>

```python
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Load model.
model_id = "google/gemma-3-27b-it"
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Recipe
recipe = [
    QuantizationModifier(
        targets="Linear",
        scheme="FP8_DYNAMIC",
        sequential_targets=["Gemma3DecoderLayer"],
        ignore=["re:.*lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-FP8-Dynamic"

# Perform oneshot
oneshot(
    model=model,
    recipe=recipe,
    trust_remote_code_model=True,
    output_dir=SAVE_DIR,
)
```

</details>

## Evaluation

The model was evaluated using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) on the OpenLLM v1 text benchmark.
The evaluations were conducted using the following commands:

<details>
<summary>Evaluation Commands</summary>

### OpenLLM v1

```
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True,enforce_eager=True \
  --tasks openllm \
  --batch_size auto
```

</details>

### Accuracy

<table>
  <thead>
    <tr>
      <th>Category</th>
      <th>Metric</th>
      <th>google/gemma-3-27b-it</th>
      <th>RedHatAI/gemma-3-27b-it-FP8-Dynamic</th>
      <th>Recovery (%)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="7"><b>OpenLLM V1</b></td>
      <td>ARC Challenge</td>
      <td>72.53%</td>
      <td>72.70%</td>
      <td>100.24%</td>
    </tr>
    <tr>
      <td>GSM8K</td>
      <td>92.12%</td>
      <td>91.51%</td>
      <td>99.34%</td>
    </tr>
    <tr>
      <td>Hellaswag</td>
      <td>85.78%</td>
      <td>85.69%</td>
      <td>99.90%</td>
    </tr>
    <tr>
      <td>MMLU</td>
      <td>77.53%</td>
      <td>77.45%</td>
      <td>99.89%</td>
    </tr>
    <tr>
      <td>TruthfulQA (mc2)</td>
      <td>62.20%</td>
      <td>62.20%</td>
      <td>99.99%</td>
    </tr>
    <tr>
      <td>Winogrande</td>
      <td>79.40%</td>
      <td>78.77%</td>
      <td>99.20%</td>
    </tr>
    <tr>
      <td><b>Average Score</b></td>
      <td><b>78.26%</b></td>
      <td><b>78.05%</b></td>
      <td><b>99.73%</b></td>
    </tr>
    <tr>
      <td rowspan="3"><b>Vision Evals</b></td>
      <td>MMMU (val)</td>
      <td>50.89%</td>
      <td>51.00%</td>
      <td>100.22%</td>
    </tr>
    <tr>
      <td>ChartQA</td>
      <td>72.16%</td>
      <td>72.16%</td>
      <td>100.0%</td>
    </tr>
    <tr>
      <td><b>Average Score</b></td>
      <td><b>61.53%</b></td>
      <td><b>61.58%</b></td>
      <td><b>100.11%</b></td>
    </tr>
  </tbody>
</table>
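As a companion to the serving note in the Deployment section, here is a minimal sketch of querying an OpenAI-compatible vLLM endpoint. It assumes a server was started locally (for example with `vllm serve RedHatAI/gemma-3-27b-it-FP8-dynamic`); the host, port, and API key are placeholders:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server (placeholder URL/key).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="RedHatAI/gemma-3-27b-it-FP8-dynamic",
    messages=[{"role": "user", "content": "Summarize FP8 dynamic quantization in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```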
bah63843/blockassist-bc-plump_fast_antelope_1756688772
bah63843
2025-09-01T01:07:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T01:06:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1756688431
bah63843
2025-09-01T01:01:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T01:01:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
phamff/vietnamese-legal-lora-adapter
phamff
2025-09-01T00:57:06Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "vietnamese", "legal", "qa", "lora", "vi", "base_model:1TuanPham/T-VisStar-7B-v0.1", "base_model:adapter:1TuanPham/T-VisStar-7B-v0.1", "region:us" ]
null
2025-09-01T00:56:38Z
---
library_name: peft
base_model: 1TuanPham/T-VisStar-7B-v0.1
tags:
- vietnamese
- legal
- qa
- lora
language: vi
---

# Vietnamese Legal QA LoRA Adapter

LoRA adapter for Vietnamese legal Q&A, trained on `1TuanPham/T-VisStar-7B-v0.1`.

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "1TuanPham/T-VisStar-7B-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "phamff/vietnamese-legal-lora-adapter")
tokenizer = AutoTokenizer.from_pretrained("1TuanPham/T-VisStar-7B-v0.1")

# Generate
question = "Quyền và nghĩa vụ của công dân là gì?"  # "What are the rights and obligations of citizens?"
prompt = f"<|user|>\n{question}\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512, temperature=0.7)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer.split("<|assistant|>\n")[-1])
```

Trained: 2025-09-01
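If a standalone model is preferred for deployment (no PEFT dependency at load time), the adapter can be folded into the base weights. A short sketch continuing from the snippet above, using the standard PEFT merge API; the output path is illustrative:

```python
# Merge the LoRA weights into the base model and save a standalone copy
# (sketch; `model` and `tokenizer` come from the Usage snippet above).
merged = model.merge_and_unload()
merged.save_pretrained("t-visstar-7b-legal-merged")
tokenizer.save_pretrained("t-visstar-7b-legal-merged")
```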
haider-shah-viral-videos-35-second-Video/New.full.videos.haider.shah.Viral.Video.Official.Tutorial
haider-shah-viral-videos-35-second-Video
2025-09-01T00:56:13Z
0
0
null
[ "region:us" ]
null
2025-09-01T00:56:03Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
elmenbillion/blockassist-bc-beaked_sharp_otter_1756684748
elmenbillion
2025-09-01T00:26:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked sharp otter", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T00:25:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked sharp otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/llama3-diverce-ver1.6-i1-GGUF
mradermacher
2025-09-01T00:21:05Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:sel303/llama3-diverce-ver1.6", "base_model:quantized:sel303/llama3-diverce-ver1.6", "license:llama3", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-31T23:40:30Z
---
base_model: sel303/llama3-diverce-ver1.6
language:
- en
library_name: transformers
license: llama3
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/sel303/llama3-diverce-ver1.6

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#llama3-diverce-ver1.6-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/llama3-diverce-ver1.6-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.6-i1-GGUF/resolve/main/llama3-diverce-ver1.6.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
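For local testing of any quant in the table, llama-cpp-python is often the quickest route. A sketch, assuming the chosen GGUF file was downloaded next to the script (the filename below is illustrative):

```python
from llama_cpp import Llama

# Load a locally downloaded quant (illustrative filename from the table above).
llm = Llama(model_path="llama3-diverce-ver1.6.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: Name one use of an imatrix file. A:", max_tokens=32)
print(out["choices"][0]["text"])
```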
liukevin666/blockassist-bc-yawning_striped_cassowary_1756685260
liukevin666
2025-09-01T00:09:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T00:08:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
barchimnases/blockassist-bc-sedate_masked_spider_1756684223
barchimnases
2025-08-31T23:51:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sedate masked spider", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T23:50:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sedate masked spider --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
golopper/blockassist-bc-sneaky_howling_eagle_1756681538
golopper
2025-08-31T23:06:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sneaky howling eagle", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T23:05:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sneaky howling eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jouuer/blockassist-bc-eager_fast_vulture_1756681377
jouuer
2025-08-31T23:03:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "eager fast vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T23:02:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - eager fast vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
golopper/blockassist-bc-savage_pale_rhino_1756680978
golopper
2025-08-31T22:56:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage pale rhino", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T22:56:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage pale rhino --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ypszn/blockassist-bc-yapping_pawing_worm_1756680929
ypszn
2025-08-31T22:56:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping pawing worm", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T22:56:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping pawing worm --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756680640
Vasya777
2025-08-31T22:51:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering enormous sloth", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T22:51:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lumbering enormous sloth --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
elmenbillion/blockassist-bc-beaked_sharp_otter_1756678904
elmenbillion
2025-08-31T22:47:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked sharp otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T22:47:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked sharp otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).