| column | dtype | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-23 18:28:48 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (573 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-23 18:28:01 |
| card | string (length) | 11 | 1.01M |
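Since every record below follows this schema, here is a minimal sketch of filtering such records with pandas; the parquet file name is hypothetical, so substitute whatever export of this dataset you actually have:

```python
import pandas as pd

# Hypothetical export path; columns follow the schema above:
# modelId, author, last_modified, downloads, likes, library_name,
# tags, pipeline_tag, createdAt, card
df = pd.read_parquet("models.parquet")

# Keep models with meaningful usage, then rank by downloads.
popular = df[(df["downloads"] > 1000) | (df["likes"] > 10)]
print(
    popular[["modelId", "downloads", "likes", "pipeline_tag"]]
    .sort_values("downloads", ascending=False)
    .head()
)
```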
internlm/Intern-S1-mini-FP8
internlm
2025-09-23T08:15:08Z
739
1
null
[ "safetensors", "interns1", "image-text-to-text", "conversational", "custom_code", "arxiv:2508.15763", "base_model:internlm/Intern-S1-mini", "base_model:quantized:internlm/Intern-S1-mini", "license:apache-2.0", "fp8", "region:us" ]
image-text-to-text
2025-08-18T06:37:20Z
--- license: apache-2.0 pipeline_tag: image-text-to-text base_model: - internlm/Intern-S1-mini --- ## Intern-S1-mini <div align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/642695e5274e7ad464c8a5ba/E43cgEXBRWjVJlU_-hdh6.png" /> <div>&nbsp;</div> [💻Github Repo](https://github.com/InternLM/Intern-S1) • [🤗Model Collections](https://huggingface.co/collections/internlm/intern-s1-6882e325e8ac1c58ba108aa5) • [📜Technical Report](https://arxiv.org/abs/2508.15763) • [💬Online Chat](https://chat.intern-ai.org.cn/) </div> <p align="center"> 👋 Join us on <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://cdn.vansin.top/intern-s1.jpg" target="_blank">WeChat</a> </p> ## Introduction We introduce **Intern-S1-mini**, a lightweight open-source multimodal reasoning model based on the same techniques as **[Intern-S1](https://huggingface.co/internlm/Intern-S1)**. Built upon an 8B dense language model (Qwen3) and a 0.3B vision encoder (InternViT), Intern-S1-mini has been further pretrained on **5 trillion tokens** of multimodal data, including over **2.5 trillion scientific-domain tokens**. This enables the model to retain strong general capabilities while excelling in specialized scientific domains such as **interpreting chemical structures, understanding protein sequences, and planning compound synthesis routes**, making Intern-S1-mini a capable research assistant for real-world scientific applications. ## Features - Strong performance across language and vision reasoning benchmarks, especially scientific tasks. - Continuously pretrained on a massive 5T-token dataset, with over 50% specialized scientific data, embedding deep domain expertise. - Dynamic tokenizer enables native understanding of molecular formulas and protein sequences. ## Performance We evaluate Intern-S1-mini on a range of benchmarks covering both general and scientific datasets, and report a performance comparison with recent VLMs and LLMs below. | | | Intern-S1-mini | Qwen3-8B | GLM-4.1V | MiMo-VL-7B-RL-2508 | |------------|----------------|-------------------|----------|----------|--------------------| | General | MMLU-Pro | **74.78** | 73.7 | 57.1 | 73.93 | |   | MMMU | **72.33** | N/A | 69.9 | 70.4 | |   | MMStar | 65.2 | N/A | 71.5 | 72.9 | |   | GPQA | **65.15** | 62 | 50.32 | 60.35 | |   | AIME2024 | **84.58** | 76 | 36.2 | 72.6 | |   | AIME2025 | **80** | 67.3 | 32 | 64.4 | |   | MathVision | 51.41 | N/A | 53.9 | 54.5 | |   | MathVista | 70.3 | N/A | 80.7 | 79.4 | |   | IFEval | 81.15 | 85 | 71.53 | 71.4 | | | | | | | | | Scientific | SFE | 35.84 | N/A | 43.2 | 43.9 | |   | Physics | **28.76** | N/A | 28.3 | 28.2 | |   | SmolInstruct | **32.2** | 17.6 | 18.1 | 16.11 | |   | ChemBench | **76.47** | 61.1 | 56.2 | 66.78 | |   | MatBench | **61.55** | 45.24 | 54.3 | 46.9 | |   | MicroVQA | **56.62** | N/A | 50.2 | 50.96 | |   | ProteinLMBench | 58.47 | 59.1 | 58.3 | 59.8 | |   | MSEarthMCQ | **58.12** | N/A | 50.3 | 47.3 | |   | XLRS-Bench | **51.63** | N/A | 49.8 | 12.29 | We use [OpenCompass](https://github.com/open-compass/OpenCompass/) and [VLMEvalKit](https://github.com/open-compass/vlmevalkit) to evaluate all models. ## Quick Start ### Sampling Parameters We recommend the following sampling hyperparameters for best results: ```python top_p = 1.0 top_k = 50 min_p = 0.0 temperature = 0.8 ``` ### Transformers The following demo code illustrates how to generate responses from text and multimodal inputs. 
> **Please use transformers>=4.55.2 to ensure the model works correctly.** #### Text input ```python from transformers import AutoProcessor, AutoModelForCausalLM import torch model_name = "internlm/Intern-S1-mini-FP8" processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True) messages = [ { "role": "user", "content": [ {"type": "text", "text": "tell me about an interesting physical phenomenon."}, ], } ] inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16) generate_ids = model.generate(**inputs, max_new_tokens=32768) decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True) print(decoded_output) ``` #### Image input ```python from transformers import AutoProcessor, AutoModelForCausalLM import torch model_name = "internlm/Intern-S1-mini-FP8" processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True) messages = [ { "role": "user", "content": [ {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"}, {"type": "text", "text": "Please describe the image explicitly."}, ], } ] inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16) generate_ids = model.generate(**inputs, max_new_tokens=32768) decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True) print(decoded_output) ``` #### Video input Please ensure that the decord video decoding library is installed via `pip install decord`. To avoid OOM, please install flash_attention and use at least 2 GPUs. 
```python from transformers import AutoProcessor, AutoModelForCausalLM import torch model_name = "internlm/Intern-S1-mini-FP8" processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True) messages = [ { "role": "user", "content": [ { "type": "video", "url": "https://huggingface.co/datasets/hf-internal-testing/fixtures_videos/resolve/main/tennis.mp4", }, {"type": "text", "text": "What type of shot is the man performing?"}, ], } ] inputs = processor.apply_chat_template( messages, return_tensors="pt", add_generation_prompt=True, video_load_backend="decord", tokenize=True, return_dict=True, ).to(model.device, dtype=torch.float16) generate_ids = model.generate(**inputs, max_new_tokens=32768) decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True) print(decoded_output) ``` ### Serving The minimum hardware requirements for deploying Intern-S1 series models are: | Model | A100(GPUs) | H800(GPUs) | H100(GPUs) | H200(GPUs) | | :---------------------------------------------------------------------: | :--------: | :--------: | :--------: | :--------: | | [internlm/Intern-S1-mini](https://huggingface.co/internlm/Intern-S1-mini) | 1 | 1 | 1 | 1 | | [internlm/Intern-S1-mini-FP8](https://huggingface.co/internlm/Intern-S1-mini-FP8) | - | 1 | 1 | 1 | You can use one of the following LLM inference frameworks to create an OpenAI-compatible server: #### [lmdeploy (>=0.9.2.post1)](https://github.com/InternLM/lmdeploy) ```bash lmdeploy serve api_server internlm/Intern-S1-mini-FP8 --reasoning-parser intern-s1 --tool-call-parser intern-s1 ``` #### [vllm (>=0.10.1)](https://github.com/vllm-project/vllm) ```bash vllm serve internlm/Intern-S1-mini-FP8 --trust-remote-code ``` #### [sglang](https://github.com/sgl-project/sglang) ```bash python3 -m sglang.launch_server \ --model-path internlm/Intern-S1-mini-FP8 \ --trust-remote-code \ --grammar-backend none ``` #### ollama for local deployment: ```bash # install ollama curl -fsSL https://ollama.com/install.sh | sh # fetch model ollama pull internlm/interns1-mini # run model ollama run internlm/interns1-mini # then use openai client to call on http://localhost:11434/v1 ``` ## Advanced Usage ### Tool Calling Many Large Language Models (LLMs) now feature **Tool Calling**, a powerful capability that allows them to extend their functionality by interacting with external tools and APIs. This enables models to perform tasks like fetching up-to-the-minute information, running code, or calling functions within other applications. A key advantage for developers is that a growing number of open-source LLMs are designed to be compatible with the OpenAI API. This means you can leverage the same familiar syntax and structure from the OpenAI library to implement tool calling with these open-source models. As a result, the code demonstrated in this tutorial is versatile: it works not just with OpenAI models, but with any model that follows the same interface standard. To illustrate how this works, let's dive into a practical code example that uses tool calling to get the latest weather forecast (based on the lmdeploy API server). ```python from openai import OpenAI import json def get_current_temperature(location: str, unit: str = "celsius"): """Get current temperature at a location. Args: location: The location to get the temperature for, in the format "City, State, Country". 
unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"]) Returns: the temperature, the location, and the unit in a dict """ return { "temperature": 26.1, "location": location, "unit": unit, } def get_temperature_date(location: str, date: str, unit: str = "celsius"): """Get temperature at a location and date. Args: location: The location to get the temperature for, in the format "City, State, Country". date: The date to get the temperature for, in the format "Year-Month-Day". unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"]) Returns: the temperature, the location, the date and the unit in a dict """ return { "temperature": 25.9, "location": location, "date": date, "unit": unit, } def get_function_by_name(name): if name == "get_current_temperature": return get_current_temperature if name == "get_temperature_date": return get_temperature_date tools = [{ 'type': 'function', 'function': { 'name': 'get_current_temperature', 'description': 'Get current temperature at a location.', 'parameters': { 'type': 'object', 'properties': { 'location': { 'type': 'string', 'description': 'The location to get the temperature for, in the format \'City, State, Country\'.' }, 'unit': { 'type': 'string', 'enum': [ 'celsius', 'fahrenheit' ], 'description': 'The unit to return the temperature in. Defaults to \'celsius\'.' } }, 'required': [ 'location' ] } } }, { 'type': 'function', 'function': { 'name': 'get_temperature_date', 'description': 'Get temperature at a location and date.', 'parameters': { 'type': 'object', 'properties': { 'location': { 'type': 'string', 'description': 'The location to get the temperature for, in the format \'City, State, Country\'.' }, 'date': { 'type': 'string', 'description': 'The date to get the temperature for, in the format \'Year-Month-Day\'.' }, 'unit': { 'type': 'string', 'enum': [ 'celsius', 'fahrenheit' ], 'description': 'The unit to return the temperature in. Defaults to \'celsius\'.' } }, 'required': [ 'location', 'date' ] } } }] messages = [ {'role': 'user', 'content': 'Today is 2024-11-14, What\'s the temperature in San Francisco now? How about tomorrow?'} ] openai_api_key = "EMPTY" openai_api_base = "http://0.0.0.0:23333/v1" client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) model_name = client.models.list().data[0].id response = client.chat.completions.create( model=model_name, messages=messages, max_tokens=32768, temperature=0.8, top_p=0.8, stream=False, extra_body=dict(spaces_between_special_tokens=False, enable_thinking=False), tools=tools) print(response.choices[0].message) messages.append(response.choices[0].message) for tool_call in response.choices[0].message.tool_calls: tool_call_args = json.loads(tool_call.function.arguments) tool_call_result = get_function_by_name(tool_call.function.name)(**tool_call_args) tool_call_result = json.dumps(tool_call_result, ensure_ascii=False) messages.append({ 'role': 'tool', 'name': tool_call.function.name, 'content': tool_call_result, 'tool_call_id': tool_call.id }) response = client.chat.completions.create( model=model_name, messages=messages, temperature=0.8, top_p=0.8, stream=False, extra_body=dict(spaces_between_special_tokens=False, enable_thinking=False), tools=tools) print(response.choices[0].message.content) ``` ### Switching Between Thinking and Non-Thinking Modes Intern-S1-mini enables thinking mode by default, enhancing the model's reasoning capabilities to generate higher-quality responses. 
This feature can be disabled by setting `enable_thinking=False` in `tokenizer.apply_chat_template` ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # think mode indicator ) ``` With LMDeploy serving Intern-S1-mini models, you can dynamically control the thinking mode by adjusting the `enable_thinking` parameter in your requests. ```python from openai import OpenAI import json messages = [ { 'role': 'user', 'content': 'who are you' }, { 'role': 'assistant', 'content': 'I am an AI' }, { 'role': 'user', 'content': 'AGI is?' }] openai_api_key = "EMPTY" openai_api_base = "http://0.0.0.0:23333/v1" client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) model_name = client.models.list().data[0].id response = client.chat.completions.create( model=model_name, messages=messages, temperature=0.8, top_p=0.8, max_tokens=2048, extra_body={ "enable_thinking": False, } ) print(json.dumps(response.model_dump(), indent=2, ensure_ascii=False)) ``` For vllm and sglang users, configure this through, ```python extra_body={ "chat_template_kwargs": {"enable_thinking": False} } ``` ## Fine-tuning See this [documentation](https://github.com/InternLM/Intern-S1/blob/main/docs/sft.md) for more details. ## Citation If you find this work useful, feel free to give us a cite. ``` @misc{bai2025interns1scientificmultimodalfoundation, title={Intern-S1: A Scientific Multimodal Foundation Model}, author={Lei Bai and Zhongrui Cai and Maosong Cao and Weihan Cao and Chiyu Chen and Haojiong Chen and Kai Chen and Pengcheng Chen and Ying Chen and Yongkang Chen and Yu Cheng and Yu Cheng and Pei Chu and Tao Chu and Erfei Cui and Ganqu Cui and Long Cui and Ziyun Cui and Nianchen Deng and Ning Ding and Nanqin Dong and Peijie Dong and Shihan Dou and Sinan Du and Haodong Duan and Caihua Fan and Ben Gao and Changjiang Gao and Jianfei Gao and Songyang Gao and Yang Gao and Zhangwei Gao and Jiaye Ge and Qiming Ge and Lixin Gu and Yuzhe Gu and Aijia Guo and Qipeng Guo and Xu Guo and Conghui He and Junjun He and Yili Hong and Siyuan Hou and Caiyu Hu and Hanglei Hu and Jucheng Hu and Ming Hu and Zhouqi Hua and Haian Huang and Junhao Huang and Xu Huang and Zixian Huang and Zhe Jiang and Lingkai Kong and Linyang Li and Peiji Li and Pengze Li and Shuaibin Li and Tianbin Li and Wei Li and Yuqiang Li and Dahua Lin and Junyao Lin and Tianyi Lin and Zhishan Lin and Hongwei Liu and Jiangning Liu and Jiyao Liu and Junnan Liu and Kai Liu and Kaiwen Liu and Kuikun Liu and Shichun Liu and Shudong Liu and Wei Liu and Xinyao Liu and Yuhong Liu and Zhan Liu and Yinquan Lu and Haijun Lv and Hongxia Lv and Huijie Lv and Qidang Lv and Ying Lv and Chengqi Lyu and Chenglong Ma and Jianpeng Ma and Ren Ma and Runmin Ma and Runyuan Ma and Xinzhu Ma and Yichuan Ma and Zihan Ma and Sixuan Mi and Junzhi Ning and Wenchang Ning and Xinle Pang and Jiahui Peng and Runyu Peng and Yu Qiao and Jiantao Qiu and Xiaoye Qu and Yuan Qu and Yuchen Ren and Fukai Shang and Wenqi Shao and Junhao Shen and Shuaike Shen and Chunfeng Song and Demin Song and Diping Song and Chenlin Su and Weijie Su and Weigao Sun and Yu Sun and Qian Tan and Cheng Tang and Huanze Tang and Kexian Tang and Shixiang Tang and Jian Tong and Aoran Wang and Bin Wang and Dong Wang and Lintao Wang and Rui Wang and Weiyun Wang and Wenhai Wang and Yi Wang and Ziyi Wang and Ling-I Wu and Wen Wu and Yue Wu and Zijian Wu and Linchen Xiao and Shuhao Xing and Chao Xu and Huihui Xu and Jun Xu and Ruiliang Xu and Wanghan Xu and GanLin Yang 
and Yuming Yang and Haochen Ye and Jin Ye and Shenglong Ye and Jia Yu and Jiashuo Yu and Jing Yu and Fei Yuan and Bo Zhang and Chao Zhang and Chen Zhang and Hongjie Zhang and Jin Zhang and Qiaosheng Zhang and Qiuyinzhe Zhang and Songyang Zhang and Taolin Zhang and Wenlong Zhang and Wenwei Zhang and Yechen Zhang and Ziyang Zhang and Haiteng Zhao and Qian Zhao and Xiangyu Zhao and Xiangyu Zhao and Bowen Zhou and Dongzhan Zhou and Peiheng Zhou and Yuhao Zhou and Yunhua Zhou and Dongsheng Zhu and Lin Zhu and Yicheng Zou}, year={2025}, eprint={2508.15763}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2508.15763}, } ```
htNghiaaa/DSC25-qwen2.5-7b-finetuned-1-merged
htNghiaaa
2025-09-23T08:14:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T08:06:59Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** htNghiaaa - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
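A minimal inference sketch for this merged checkpoint, assuming it loads with the standard transformers chat-template APIs (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "htNghiaaa/DSC25-qwen2.5-7b-finetuned-1-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Build a chat prompt and generate a short reply.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```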
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64-0922195515-epoch-5
vectorzhou
2025-09-23T08:12:04Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:56:47Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64 tags: - generated_from_trainer - text-generation - fine-tuned - trl - extra-gradient licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64 This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64-0922195515-epoch-5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/6kinw4fn) This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.8.0+cu128 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite Extragradient as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
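For background, a minimal statement of the generic extragradient update that EGPO builds on (the classical Korpelevich scheme; the exact EGPO objective and its NLHF game setting are defined in the linked paper, so treat this as context rather than this checkpoint's precise training loop):

```latex
% Generic extragradient step for a game with operator F (e.g., the gradient
% field of a two-player preference game) and step size \eta.
z_{t+1/2} = z_t - \eta \, F(z_t)        % extrapolation: probe a half-step ahead
z_{t+1}   = z_t - \eta \, F(z_{t+1/2})  % update using the extrapolated gradient
```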
joigalcar/ppo-LunarLander-v2_Scratch_2
joigalcar
2025-09-23T08:11:47Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2025-09-23T08:11:40Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -111.48 +/- 53.00 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo', 'seed': 1, 'torch_deterministic': True, 'cuda': True, 'track': False, 'wandb_project_name': 'cleanRL', 'wandb_entity': None, 'capture_video': False, 'env_id': 'LunarLander-v2', 'total_timesteps': 100000, 'learning_rate': 0.00025, 'num_envs': 4, 'num_steps': 128, 'anneal_lr': True, 'gae': True, 'gamma': 0.99, 'gae_lambda': 0.95, 'num_minibatches': 4, 'update_epochs': 4, 'norm_adv': True, 'clip_coef': 0.2, 'clip_vloss': True, 'ent_coef': 0.01, 'vf_coef': 0.5, 'max_grad_norm': 0.5, 'target_kl': None, 'repo_id': 'joigalcar/ppo-LunarLander-v2_Scratch_2', 'batch_size': 512, 'minibatch_size': 128} ```
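A hedged evaluation sketch for this custom implementation; the checkpoint file name, serialization format, and agent API below are assumptions, not documented facts of this repo:

```python
# Hypothetical loading/rollout sketch for this custom PPO checkpoint.
import gymnasium as gym
import torch
from huggingface_hub import hf_hub_download

# "model.pt" is an assumed file name; check the repo's file listing.
ckpt_path = hf_hub_download(
    repo_id="joigalcar/ppo-LunarLander-v2_Scratch_2", filename="model.pt"
)
# Assumes the whole agent module was pickled; if only a state_dict was saved,
# instantiate the Agent class from the training script and load into it instead.
agent = torch.load(ckpt_path, map_location="cpu")

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
obs, _ = env.reset(seed=1)
done, total_reward = False, 0.0
while not done:
    with torch.no_grad():
        # cleanRL-style API (an assumption): returns action, logprob, entropy, value.
        action, *_ = agent.get_action_and_value(
            torch.tensor(obs, dtype=torch.float32).unsqueeze(0)
        )
    obs, reward, terminated, truncated, _ = env.step(action.item())
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward:.2f}")
```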
tamewild/4b_v122_merged_e5
tamewild
2025-09-23T08:11:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T08:09:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KarusG/blockassist
KarusG
2025-09-23T08:08:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scaly sniffing deer", "arxiv:2504.07091", "region:us" ]
null
2025-09-22T09:25:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scaly sniffing deer --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ganga4364/Garchen_Rinpoche-whisper_latin_tibetan_added_on_uni_Checkpoint-4000
ganga4364
2025-09-23T08:07:15Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-23T08:07:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tkcho/domain_97fc1e6f533ce74eb2276452650dab60
tkcho
2025-09-23T08:05:17Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-09-03T00:46:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tkcho/domain_b3e637644f1fe22247aa8317d31911ab
tkcho
2025-09-23T08:04:29Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-09-03T01:04:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
eiknarf/Smoothie-Qwen3-1.7B-Gensyn-Swarm-scavenging_playful_stingray
eiknarf
2025-09-23T08:00:53Z
28
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am scavenging_playful_stingray", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-21T19:39:10Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am scavenging_playful_stingray --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pepijn223/pi05_base_fp32
pepijn223
2025-09-23T08:00:22Z
186
1
null
[ "safetensors", "region:us" ]
null
2025-09-09T14:55:33Z
# π₀.₅ - Base This is a PyTorch version of the PI0.5 `pi05_base` model, converted from the original JAX/Flax implementation. ## Model Details - **Architecture**: PI0.5 (Vision-Language-Action model with discrete state input) - **Model Type**: PI0.5 - **Domain**: Base model (general purpose) - **Precision**: 32-bit floating point (fp32) - **Vision Model**: PaliGemma (gemma_2b) - **Action Expert**: gemma_300m ## Key Features - **Discrete State Input**: Uses discrete language tokens for state representation - **Flow Matching**: Utilizes adaRMSNorm for timestep injection in the action expert - **Enhanced Action Modeling**: Improved action prediction with a flow-matching approach ## Conversion Details This model was converted from JAX to PyTorch using the OpenPI conversion script: ```bash python examples/convert_jax_model_to_pytorch.py \ --checkpoint_dir /pi05_base \ --config_name pi05_base \ --output_path /pi05_base/pytorch/fp32/ \ --precision float32 ``` ## Usage ```python from openpi.models_pytorch.pi0_pytorch import PI0Pytorch import torch # Load the model model = PI0Pytorch.from_pretrained("pepijn223/pi05_base_fp32") # The model expects inputs in the format: # - images: torch.Tensor of shape [batch, height, width, channels] # - text: tokenized text prompts # - proprioceptive_state: robot state information (if applicable) ``` ## Model Architecture The model consists of: 1. **Vision Encoder**: PaliGemma-based vision processing 2. **Language Encoder**: Text prompt understanding 3. **Action Expert**: Specialized network for action prediction 4. **Integration Layer**: Combines multimodal information for action output ## Training Data This model was trained on robotics datasets appropriate for its domain: - **DROID models**: Trained on diverse robot manipulation data - **LIBERO models**: Trained on diverse tabletop manipulation scenarios - **Base models**: Trained on general robotics datasets ## Limitations - Model performance depends on similarity between deployment and training environments - May require domain-specific fine-tuning for optimal performance - Action space must match the trained action dimension (32) ## Citation If you use this model, please cite the original OpenPI work: ```bibtex @article{openpi2024, title={Open-World Robotic Manipulation with Vision-Language-Action Models}, author={Physical Intelligence}, year={2024}, url={https://github.com/Physical-Intelligence/openpi} } ``` ## Original Repository [OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi) ## License This model follows the same license as the original OpenPI repository.
shubhamprshr/Qwen2.5-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
shubhamprshr
2025-09-23T07:59:07Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "grpo", "trl", "conversational", "dataset:blocksworld-dataset", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T23:59:34Z
--- base_model: Qwen/Qwen2.5-3B-Instruct datasets: blocksworld-dataset library_name: transformers model_name: Qwen2.5-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200 tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for Qwen2.5-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200 This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shubhamprshr/Qwen2.5-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/auto/runs/6to8yztb) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.19.1 - Transformers: 4.53.1 - Pytorch: 2.7.0 - Datasets: 4.1.1 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
aryanmalik/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-toothy_shiny_manatee
aryanmalik
2025-09-23T07:59:03Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am toothy_shiny_manatee", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T07:58:28Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am toothy_shiny_manatee --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
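Since the quick-start section above is unfilled, the following is a minimal sketch of loading a `transformers` text-generation checkpoint like this one; the repo id is a placeholder, not this model's actual Hub path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual Hub path of this checkpoint.
repo_id = "<user>/<model>"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```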
guangyaoz/dpo
guangyaoz
2025-09-23T07:56:38Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "dpo", "trl", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-07-31T05:09:42Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers model_name: dpo tags: - generated_from_trainer - dpo - trl licence: license --- # Model Card for dpo This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="guangyaoz/dpo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.20.0 - Transformers: 4.53.2 - Pytorch: 2.7.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.2 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Gigaszi/panoramic-waste-detection
Gigaszi
2025-09-23T07:54:03Z
0
0
null
[ "en", "base_model:Ultralytics/YOLO11", "base_model:finetune:Ultralytics/YOLO11", "license:unlicense", "region:us" ]
null
2025-09-22T13:24:29Z
--- license: unlicense language: - en base_model: - Ultralytics/YOLO11 ---
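The card body is empty, so here is a minimal detection sketch with the `ultralytics` package; the weights filename and image path are assumptions, since the repo's file layout is not described.

```python
from ultralytics import YOLO

# Assumed weights filename -- check the repo's Files tab for the actual name.
model = YOLO("best.pt")

# Run waste detection on a panoramic image (path is illustrative).
results = model.predict("panorama.jpg", conf=0.25)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, bounding box
```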
jd-opensource/JSL-joysafety-v1
jd-opensource
2025-09-23T07:51:20Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-23T07:51:20Z
--- license: apache-2.0 ---
vinchu/QwenH
vinchu
2025-09-23T07:46:30Z
16
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T07:45:05Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** vinchu - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
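As a quick-start sketch (not part of the original card), the checkpoint can presumably be queried with the standard `transformers` pipeline API; the chat-style input assumes the tokenizer ships a chat template, as Qwen2.5 instruct models typically do.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="vinchu/QwenH")
messages = [{"role": "user", "content": "Summarize what Unsloth does in one sentence."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```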
shui1010/shui1010_old
shui1010
2025-09-23T07:45:59Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-23T06:44:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
genies-llm/text2sql-grpo-intermediate-reward
genies-llm
2025-09-23T07:45:43Z
17
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:Genies/text2sql-grpo-d6", "arxiv:2402.03300", "base_model:Genies/text2sql-sft-kumar-v4", "base_model:finetune:Genies/text2sql-sft-kumar-v4", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T04:31:04Z
--- base_model: Genies/text2sql-sft-kumar-v4 datasets: Genies/text2sql-grpo-d6 library_name: transformers model_name: text2sql-grpo-intermediate-reward tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for text2sql-grpo-intermediate-reward This model is a fine-tuned version of [Genies/text2sql-sft-kumar-v4](https://huggingface.co/Genies/text2sql-sft-kumar-v4) on the [Genies/text2sql-grpo-d6](https://huggingface.co/datasets/Genies/text2sql-grpo-d6) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="genies-llm/text2sql-grpo-intermediate-reward", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/genies-rnd/text2sql-rl/runs/9f5vruq2) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.3 - Pytorch: 2.7.0a0+git295f2ed - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758613436
poolkiltzn
2025-09-23T07:45:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-23T07:44:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vigilant alert tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Huang7gege/xTTS
Huang7gege
2025-09-23T07:45:04Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-23T07:43:29Z
--- license: apache-2.0 ---
deadman44/Qwen-Image_LoRA
deadman44
2025-09-23T07:40:52Z
0
3
null
[ "text-to-image", "qwen image", "safetensors", "en", "license:apache-2.0", "region:us" ]
text-to-image
2025-09-18T00:50:15Z
--- license: apache-2.0 pipeline_tag: text-to-image language: - en tags: - text-to-image - qwen image - safetensors --- <style> .title{ font-size: 2.5em; letter-spacing: 0.01em; padding: 0.5em 0; } .thumbwidth{ max-width: 180px; } .font_red{ color:red; } .font_blue{ color:blue; } .font_grey{ color: #aaaaaa; } </style> # models - Add [lora_qwen_myjc_v01](#myjc) (<span class="font_blue">Qwen-Image LoRA</span>):2025-09-23<br /> --- <br> # Sample Workflow ### - [Workflow for myxx series LoRA](https://huggingface.co/deadman44/Qwen-Image_LoRA/raw/main/workflow/qwen_image.json)<br> - <span class="font_blue">reccomended</span><br/> <br> ### - [Workflow Triple_test](https://huggingface.co/deadman44/Qwen-Image_LoRA/raw/main/workflow/qwen_image_Triple.json)<br> - <span class="font_red">The image looks good overall, but it has quite a few visual glitches.</span> <br> --- <a id="myjc"></a> <h1 class="title"> <span>lora_qwen_myjc_v01</span> </h1> -<span class="font_red">Lora for Qwen-Image</span><br/> -<span class="font_blue">natural Japanese JC face</span><br/> <br/> <br/> # Download [Download: myjc_v01](https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/lora_qwen_myjc_v01.safetensors?download=true) <br /> <br /> # Trigger ```bash myjc, japanese/european, photorealistic and 13-15yo ``` <br /> # Sample prompt <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923154926_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923154926_qwen_image_00001_.jpg" alt="T2I" style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> ```bash 15yo, myjc, japanese, a Japanese schoolgirl in uniform holding a flip board with "qwen" written on it, smiling awkwardly after a small clumsy mistake, like nearly tripping or dropping her pen, surrounded by classmates laughing gently, warm afternoon sunlight, cherry blossoms in the background, cinematic composition, soft shadows, emotionally expressive, humorous and heartwarming mood ``` <br/> <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923155356_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923155356_qwen_image_00001_.jpg" alt="T2I" style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> ```bash 15yo, myjc, japanese, black hair, This photograph of a girl sitting on a bench in a train. She has straight long black twintails and is wearing a short sleeve white shirt with a collar and a grey pleated skirt. Her posture is relaxed and her expression is neutral. She holds a smartphone in her right hand looking at camera. A black handbag is placed on her lap. The background shows the interior of a train car with metallic walls and a green and white patterned seat. The lighting is dim and the overall atmosphere is typical of a public transportation setting. The image is candid and captures a moment of casualness and comfort. 
``` <br/> <strng>Normal Node (reccomended)</strong> <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923161905_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923161905_qwen_image_00001_.jpg" alt="T2I" style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> <strng>Triple Node (experimental)</strong> <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923162424_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923162424_qwen_image_00001_.jpg" alt="T2I" style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> ```bash 14yo, myjc, japanese, straight long hair, bangs, smile, The photograph of a young girl in casual uniform lie on your back on a table surrounded by several men. The background is a dark restaurant and the girl is illuminated by lights. The image is viewed from an angle. ``` <br/> <div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;"> <a href="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923160144_qwen_image_00001_.png" target="_blank"> <img src="https://huggingface.co/deadman44/Qwen-Image_LoRA/resolve/main/sample_images/20250923160144_qwen_image_00001_.jpg" alt="T2I" style="width: 480px; height: auto; object-fit: contain; border: 1px solid #ccc;"> </a> </div> ```bash 15yo, myjc, japanese, five schoolgirls in sailor uniforms striking playful sentai-style poses on a quiet urban street, each with a different hairstyle: ponytail, short bob, twin braids, loose long hair, and side bun, natural lighting, casual atmosphere, no special effects, soft shadows, relaxed expressions, subtle smiles, everyday setting with buildings and trees in the background, cinematic composition, emotionally expressive, group coordination with individuality ``` <br/> ---
y1y2y3/act_100k_v3
y1y2y3
2025-09-23T07:40:47Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:y1y2y3/so101_test4", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-09-23T02:54:33Z
--- datasets: y1y2y3/so101_test4 library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - act - lerobot - robotics --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
romolocaponera/ppo-PyramidsRND-1
romolocaponera
2025-09-23T07:40:10Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2025-09-23T07:40:06Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: romolocaponera/ppo-PyramidsRND-1 3. Step 2: Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-5
vectorzhou
2025-09-23T07:39:51Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:28:26Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64 tags: - generated_from_trainer - text-generation - fine-tuned - trl - extra-gradient licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64 This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/09vdah42) This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.8.0+cu128 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite Extragradient as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mihirr01/gemma3-1B-IT
mihirr01
2025-09-23T07:39:25Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/gemma-3-270m-it", "lora", "sft", "transformers", "trl", "unsloth", "arxiv:1910.09700", "base_model:unsloth/gemma-3-270m-it", "region:us" ]
null
2025-09-23T07:32:00Z
--- base_model: unsloth/gemma-3-270m-it library_name: peft tags: - base_model:adapter:unsloth/gemma-3-270m-it - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
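Since the quick-start section above is unfilled, here is a minimal sketch for loading this LoRA adapter onto its base model with `peft`; it assumes the repo contains a standard PEFT adapter for `unsloth/gemma-3-270m-it`, as the card metadata suggests.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-3-270m-it"   # base model from the card metadata
adapter_id = "mihirr01/gemma3-1B-IT"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Hello!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```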
manycore-research/SpatialGen-1.0
manycore-research
2025-09-23T07:39:11Z
214
21
diffusers
[ "diffusers", "safetensors", "image-to-3d", "dataset:manycore-research/SpatialGen-Testset", "arxiv:2509.14981", "base_model:stabilityai/stable-diffusion-2-1", "base_model:finetune:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "diffusers:SpatialGenDiffusionPipeline", "region:us" ]
image-to-3d
2025-08-20T13:47:57Z
--- base_model: - stabilityai/stable-diffusion-2-1 datasets: - manycore-research/SpatialGen-Testset license: creativeml-openrail-m pipeline_tag: image-to-3d --- # SpatialGen: Layout-guided 3D Indoor Scene Generation <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <picture> <source srcset="https://cdn-uploads.huggingface.co/production/uploads/6437c0ead38ce48bdd4b0067/myrWYVNd4m-DuxV39VQZ0.png" media="(prefers-color-scheme: dark)"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6437c0ead38ce48bdd4b0067/QQvDtmokH4ZjwH0wppqFC.png" width="60%" alt="SpatialGen"/> </picture> </div> <hr style="margin-top: 0; margin-bottom: 8px;"> <div align="center" style="margin-top: 0; padding-top: 0; line-height: 1;"> <a href="https://manycore-research.github.io/SpatialGen" target="_blank" style="margin: 2px;"><img alt="Project" src="https://img.shields.io/badge/🌐%20Project-SpatialGen-ffc107?color=42a5f5&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a> <a href="https://arxiv.org/abs/2509.14981" target="_blank" style="margin: 2px;"><img alt="arXiv" src="https://img.shields.io/badge/arXiv-SpatialGen-b31b1b?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a> <a href="https://github.com/manycore-research/SpatialGen" target="_blank" style="margin: 2px;"><img alt="GitHub" src="https://img.shields.io/badge/GitHub-SpatialGen-24292e?logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a> <a href="https://huggingface.co/manycore-research/SpatialGen-1.0" target="_blank" style="margin: 2px;"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-SpatialGen-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a> </div> <div align="center"> | Image-to-Scene Results | Text-to-Scene Results | | :--------------------------------------: | :----------------------------------------: | | ![Img2Scene](https://cdn-uploads.huggingface.co/production/uploads/6437c0ead38ce48bdd4b0067/ksN5t8QEu3Iv6KhpsYsk6.png) | ![Text2Scene](https://cdn-uploads.huggingface.co/production/uploads/6437c0ead38ce48bdd4b0067/waCRa3kp01KAsKgmqS1bb.png) | <p>TL;DR: Given a 3D semantic layout, SpatialGen can generate a 3D indoor scene conditioned on either a reference image (left) or a textual description (right) using a multi-view, multi-modal diffusion model.</p> </div> ## ✨ News - [Sep, 2025] We released the SpatialGen paper! - [Aug, 2025] Initial release of SpatialGen-1.0! ## 📋 Release Plan - [x] Provide inference code of SpatialGen. - [ ] Provide training instruction for SpatialGen. - [ ] Release SpatialGen dataset. 
## SpatialGen Models <div align="center"> | **Model** | **Download** | | :-----------------------: | -------------------------------------------------------------------------------------| | SpatialGen-1.0 | [🤗 HuggingFace](https://huggingface.co/manycore-research/SpatialGen-1.0) | | FLUX.1-Layout-ControlNet | [🤗 HuggingFace](https://huggingface.co/manycore-research/FLUX.1-Layout-ControlNet) | | FLUX.1-Wireframe-dev-lora | [🤗 HuggingFace](https://huggingface.co/manycore-research/FLUX.1-Wireframe-dev-lora) | </div> ## Usage ### 🔧 Installation Tested with the following environment: * Python 3.10 * PyTorch 2.3.1 * CUDA Version 12.1 ```bash # clone the repository git clone https://github.com/manycore-research/SpatialGen.git cd SpatialGen python -m venv .venv source .venv/bin/activate pip install -r requirements.txt # Optional: fix the [flux inference bug](https://github.com/vllm-project/vllm/issues/4392) pip install nvidia-cublas-cu12==12.4.5.8 ``` ### 📊 Dataset We provide [SpatialGen-Testset](https://huggingface.co/datasets/manycore-research/SpatialGen-Testset) with 48 rooms, each labeled with a 3D layout and accompanied by 4.8K rendered images (48 x 100 views, including RGB, normal, depth, and semantic maps) for MVD inference. ### Inference ```bash # Single image-to-3D Scene bash scripts/infer_spatialgen_i2s.sh # Text-to-image-to-3D Scene # in captions/spatialgen_testset_captions.jsonl, we provide text prompts of different styles for each room, # choose a pair of scene_id and prompt to run the text2scene experiment bash scripts/infer_spatialgen_t2s.sh ``` ## License [SpatialGen-1.0](https://huggingface.co/manycore-research/SpatialGen-1.0) is derived from [Stable-Diffusion-v2.1](https://github.com/Stability-AI/stablediffusion), which is licensed under the [CreativeML Open RAIL++-M License](https://github.com/Stability-AI/stablediffusion/blob/main/LICENSE-MODEL). [FLUX.1-Layout-ControlNet](https://huggingface.co/manycore-research/FLUX.1-Layout-ControlNet) is licensed under the [FLUX.1-dev Non-Commercial License](https://github.com/black-forest-labs/flux/blob/main/model_licenses/LICENSE-FLUX1-dev). ## Acknowledgements We would like to thank the following projects that made this work possible: [DiffSplat](https://github.com/chenguolin/DiffSplat) | [SD 2.1](https://github.com/Stability-AI/stablediffusion) | [TAESD](https://github.com/madebyollin/taesd) | [FLUX](https://github.com/black-forest-labs/flux/) | [SpatialLM](https://github.com/manycore-research/SpatialLM) ## Citation ```bibtex @article{SpatialGen, title = {SpatialGen: Layout-guided 3D Indoor Scene Generation}, author = {Fang, Chuan and Li, Heng and Liang, Yixu and Zheng, Jia and Mao, Yongsen and Liu, Yuan and Tang, Rui and Zhou, Zihan and Tan, Ping}, journal = {arXiv preprint}, year = {2025}, eprint = {2509.14981}, archivePrefix = {arXiv}, primaryClass = {cs.CV} } ```
TerralinKapseln4/PurivaPillen
TerralinKapseln4
2025-09-23T07:36:20Z
0
0
null
[ "region:us" ]
null
2025-09-23T07:34:33Z
Puriva is synonymous with modern technologies and environmentally friendly processes. The company specialises in energy-efficient air-conditioning systems for residential and commercial applications. From compact split air conditioners for living spaces to high-performance systems for offices and commercial buildings, Puriva offers tailor-made solutions that comply with the latest European energy-efficiency standards. Many units are equipped with smart features, such as Wi-Fi control or inverter technology, that improve comfort and reduce energy consumption. ## [Click here to order on the official Puriva Pillen website](https://puriva-pillen.nl) ## Puriva in the Netherlands: an innovative supplier in the household-appliance sector ### Discover Puriva Puriva GmbH has established itself in the Netherlands as a leading supplier in the dynamic distribution sector. Specialising in the wholesale and retail of small and large household appliances, including air conditioners, Puriva has successfully carved out an attractive niche market for both private and business customers. The company, founded on 18 November 2021, has its head office in Uetze, Lower Saxony, and is registered in the Hanover Commercial Register under HRB 224521. Puriva stands for quality, innovation and customer focus, which makes it a relevant subject for this article. In this article we take a closer look at Puriva GmbH, its mission, its products and services, and their impact on the German market. We examine how Puriva meets the expectations of modern consumers through its unique positioning and its focus on sustainable technologies. ### Origins and vision of Puriva Puriva GmbH was founded with a clear vision: to offer high-quality electrical appliances, including air conditioners, to both private and business customers. Its head office is located at Burgdorfer Straße 85-89 in Uetze, a strategic location near Hanover that enables efficient logistics and distribution. With share capital of €25,000, Puriva has a solid financial basis for achieving its ambitious goals. Puriva's mission extends beyond retail. The company aims to offer innovative, energy-efficient solutions that meet the needs of an environmentally conscious clientele. At a time when sustainability and energy savings are becoming ever more important, Puriva positions itself as a company that prioritises not only quality but also environmental responsibility. This is reflected in its product selection, which is designed for long service life and high performance. ### Puriva's product range: a focus on air conditioners The sale of air conditioners is the core of Puriva's business model. These appliances are essential for both private individuals and businesses, especially given rising temperatures in the Netherlands and the growing demand for efficient cooling solutions. Puriva offers a wide range of air-conditioning systems, from compact split units for single-family homes to powerful systems for offices and commercial premises. What sets Puriva apart from other suppliers is its careful product selection. Its air conditioners are designed not only with the latest technology but also with an eye for energy efficiency and ease of use. Many of the units on offer comply with the latest European energy-consumption standards, making them environmentally friendly and economical. 
Puriva also attaches great importance to attractive design that blends seamlessly into a variety of residential and business environments. Besides air conditioners, the Puriva range includes accessories such as air purifiers, heaters and other small electrical appliances that round out the product line. This diversity enables Puriva to meet a wide range of needs, from simple solutions for residential use to complex equipment for commercial premises. ## [Click here to order on the official Puriva Pillen website](https://puriva-pillen.nl) ### E-commerce: Puriva's digital presence The focus on e-commerce is another key factor in Puriva's success. With more and more consumers shopping online, Puriva has created a user-friendly platform that lets customers buy high-quality household appliances from the comfort of their own homes. The company's intuitive website offers detailed product descriptions, customer reviews and technical specifications, enabling well-informed purchasing decisions. Puriva's online shop is characterised by fast delivery times and reliable customer service. Customers can not only choose from a wide range of products but also benefit from comprehensive support, from pre-purchase advice to installation and maintenance. This all-round approach makes Puriva a trusted partner for household appliances in the Netherlands. ### Sustainability and responsibility Sustainable development is a key theme in today's economy, and Puriva is committed to it. The company's air conditioners and household appliances are designed to minimise energy consumption while optimising performance. This is particularly important in a country like the Netherlands, where the energy transition and climate protection are top priorities. Puriva also works with like-minded suppliers. Its products are manufactured to environmental standards, and the company gives priority to recyclable materials and durable components. In this way Puriva helps reduce its customers' ecological footprint while offering high-quality products. Puriva's sustainability strategy also includes promoting recycling programmes. Customers are encouraged to dispose of their old appliances properly. Together with its partners, Puriva offers return and recycling solutions for electrical appliances. This approach shows that Puriva considers the entire life cycle of its products, not just the sale. ### Challenges and opportunities Like any company, Puriva faces challenges. The market for electrical appliances is heavily regulated, and compliance with environmental and safety regulations requires constant adaptation. Puriva also sees opportunities, however. By investing in research and development, the company stays at the forefront of technological innovation and can develop new products that meet future demand. Competition from large retail chains and e-commerce giants is another factor to consider. Puriva nevertheless sets itself apart through specialisation and excellent customer service. Thanks to targeted marketing strategies and a strong online presence, Puriva succeeds in building customer loyalty. ## Conclusion Puriva GmbH is a promising company that has quickly established itself on the German household-appliance market. 
Its high-quality air-conditioning systems, sustainable solutions and excellent customer service set Puriva apart from the competition. The combination of innovative products, a strong online presence and a clear commitment to sustainable development makes Puriva a company that is helping to shape the future of the household-appliance industry. For both private individuals and businesses, it offers tailor-made solutions that meet the needs of a modern, environmentally conscious society. With its strategic focus and its commitment to quality and innovation, Puriva will undoubtedly continue to play an important role on the German market. ## [Click here to order on the official Puriva Pillen website](https://puriva-pillen.nl)
tkcho/domain_48714d83acf3986aa7f6463b35ffa16e
tkcho
2025-09-23T07:34:34Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-07-08T00:44:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DennisS1/dgy
DennisS1
2025-09-23T07:34:28Z
24
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Qwen/Qwen-Image", "base_model:adapter:Qwen/Qwen-Image", "region:us" ]
text-to-image
2025-09-23T07:32:22Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/Screen Shot 2025-09-23 at 5.32.18 pm.png text: Screenshot base_model: Qwen/Qwen-Image instance_prompt: doggy --- # dgy <Gallery /> ## Trigger words You should use `doggy` to trigger the image generation. ## Download model [Download](/DennisS1/dgy/tree/main) them in the Files & versions tab.
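A minimal sketch (not part of the original card) of applying this LoRA with `diffusers`, assuming the adapter uses the standard LoRA layout that `load_lora_weights` expects; note the `doggy` trigger word described above.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("DennisS1/dgy")  # assumes a standard diffusers LoRA checkpoint
pipe.to("cuda")

# Include the trigger word in the prompt to activate the LoRA's concept.
image = pipe(prompt="doggy sitting in a sunny park", num_inference_steps=30).images[0]
image.save("dgy_sample.png")
```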
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64-0922195506-epoch-5
vectorzhou
2025-09-23T07:33:10Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:22:23Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64 tags: - generated_from_trainer - text-generation - fine-tuned - trl - extra-gradient licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64 This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64-0922195506-epoch-5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/2zoaj66c) This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.8.0+cu128 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite Extragradient as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Fatin757/ssf-retriever-modernbert-v7
Fatin757
2025-09-23T07:32:02Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "modernbert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:7540", "loss:MultipleNegativesRankingLoss", "dataset:Fatin757/ssf-train-valid_v7", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:nomic-ai/modernbert-embed-base", "base_model:finetune:nomic-ai/modernbert-embed-base", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-23T07:31:55Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:7540 - loss:MultipleNegativesRankingLoss base_model: nomic-ai/modernbert-embed-base widget: - source_sentence: The Scriptwriter/Writer is responsible for creating blueprints and details of the script based on the concept or idea. With a deep understanding of the storyline, the target audience and the requirements of the creative leadership teams, he/she develops the story elements to translate the creative vision into a beautiful story for production. He works closely with the production teams to review and revise the script based on inputs to fit the potential audience appeal and enhance the suitability and marketability of the production. During the development process, he frequently reviews the work to ensure it meets required editorial standards. He also flags the possibility of legalities that may occur in view of the regulatory requirements and local needs of the primary market and audience. He is expected to work under pressure so as to manage edits within a short time frame. He may be required to travel depending on the production requirements. He should have an understanding on how productions affect audiences and be familiar with the current formats of presenting screenplays. He should be well versed with script-writing guidelines and techniques to be able to develop a full-length script that is production ready within required deadlines. He should also have a fundamental understanding of the process of translating scripts to various visual media, as well as knowledge of script requirements for immersive content. He should possess strong grammar and writing capability as well as creativity, patience, self-motivation and resilience, with an excellent understanding of production processes. sentences: - 'The Senior Script Editor leads a team to oversee the refinement and finalization of scripts, ensuring alignment with creative vision and production standards. They work extensively with directors and producers to approve script revisions and manage legal clearances but do not typically write original scripts themselves. This role focuses more on editorial oversight and coordination than on initial script creation. Travel is frequent to various production sites to supervise script implementation and ensure compliance with regulatory frameworks. The Content Writer for digital marketing develops engaging copy and content strategies tailored for online platforms. They collaborate with marketing teams to optimize content for SEO and audience engagement but do not create scripts for visual media productions. Their work involves writing blogs, social media posts, and promotional materials rather than full-length scripts, with an emphasis on quick turnaround and analytics-driven content performance. The Screenplay Consultant advises film and TV productions on script structure, character development, and market trends. While they provide expert feedback and marketability assessments, they do not write or revise scripts directly. The role requires extensive knowledge of production processes and audience preferences but is primarily advisory, often working remotely without travel requirements.' - The Business Controller/Finance Director acts as the key financial advisor and partner to all business units within the organisation. 
This role entails offering expert accounting guidance to stakeholders to enhance organisational value while mitigating risks in line with both external regulations and internal policies. The Business Controller/Finance Director excels in building strong relationships and exploring new business opportunities. Additionally, they play a vital role in financial planning and analysis, supporting management decisions, managing operational risks, and ensuring effective business performance through profitability and operational reviews. The role also includes responsibilities such as recruitment, performance evaluation, and identifying training needs for staff across the organisation. - The Scriptwriter/Writer crafts detailed script blueprints based on concepts or ideas. With a thorough grasp of the storyline, target audience, and creative leadership needs, they develop story elements that bring the creative vision to life for production. Collaborating closely with production teams, they revise scripts according to feedback to enhance audience appeal and marketability. Throughout development, they ensure editorial standards are met and identify potential legal issues related to regulatory and local market requirements. The role demands managing edits under tight deadlines and may involve travel based on production needs. The Scriptwriter/Writer understands audience impact, current screenplay formats, and script-writing techniques to deliver production-ready full-length scripts on time. They also have foundational knowledge of adapting scripts for various visual media and immersive content, paired with strong grammar, creativity, patience, self-motivation, resilience, and a solid understanding of production processes. - source_sentence: The Executive - Localisation coordinates internal and external processes to execute the localisation of the organisation's content for delivery to specific territories. He/She maintains day-to-day communication with internal localisation teams and vendors to monitor the progress of specific projects. He is also responsible for communicating expected quality standards for localisation assets to internal localisation teams and localisation vendors. The work involves a high level of coordination and communication with internal and external stakeholders. He spends most of his time liaising with external vendors as well as internal teams for content localisation. He is expected to be effective at planning and stakeholder management in order to coordinate with all stakeholders involved in the localisation processes and projects. sentences: - The Strategy & Governance Director/Assistant Director leads the development and implementation of the organisation's strategic plans and governance frameworks. They are responsible for managing risk, ensuring compliance with governance standards, and collaborating with the Executive Committee, Council, or Board to identify new opportunities for sustainable growth. This role includes coordinating board and management meetings, preparing and presenting reports, and guiding the organisation’s budgeting process. The ideal candidate is strategic, analytical, risk-aware, and skilled at communicating complex decisions to senior leadership and stakeholders. - "The Senior Localisation Manager leads the localisation strategy and oversees\ \ multiple teams to deliver content localisation across global markets, focusing\ \ on high-level project management and vendor negotiations. 
\nThe Content Marketing\ \ Executive coordinates marketing campaigns and liaises with internal creative\ \ teams and external agencies to promote products within local markets. \nThe\ \ Executive - Translation Services handles the translation of documents and ensures\ \ linguistic accuracy but does not manage the broader localisation process or\ \ vendor relationships." - The Executive - Localisation manages both internal and external workflows to ensure the organisation’s content is accurately localised for targeted regions. He/She maintains continuous communication with internal localisation teams and external vendors to track project progress and ensures all localisation outputs meet established quality standards. This role requires strong coordination and communication skills to work effectively with various stakeholders. The Executive spends a significant portion of time collaborating with vendors and internal teams to facilitate content localisation and is expected to excel in planning and stakeholder engagement to successfully oversee localisation projects. - source_sentence: A Senior Principal Occupational Therapy Manager sets the strategic direction of the department and leads occupational therapists in cluster-wide initiatives to enhance clinical innovation and evidence-based practice. S/He leads change by implementing new or revising policies and driving the corporate governance agenda. S/He is in charge of leading improvements in service delivery and the care model and plans strategies to promote these new improvements and new clinical services. S/He ensures that there is sufficient human resources in the department and manages the budgets in the clinical setting. Her/His core function will be in managerial work, but s/he will also perform some clinical, educational and research tasks in the course of her/his day-to-day work. S/He may work in various settings such as but not limited to public and private institutions, acute and community hospitals, rehabilitation centres, voluntary welfare organisations, schools, integrated and long-term care facilities and clients homes and work environments. S/He may also work as part of collaborative, interdisciplinary teams which may include teachers, nurses, doctors, audiologists, psychologists, social workers, physiotherapists and speech therapists. S/He should be visionary, driven and decisive. S/He should possess effective interpersonal, team-building and leadership skills. sentences: - A Senior Principal Occupational Therapy Manager is responsible for setting the strategic vision of the department and guiding occupational therapists in cluster-wide programs to advance clinical innovation and evidence-based practices. This role involves leading transformation by introducing new or updated policies and championing corporate governance initiatives. The manager oversees enhancements in service delivery and care models, developing strategies to promote these advances and new clinical services. They ensure adequate staffing levels within the department and handle budget management in clinical environments. While primarily focused on managerial duties, the role also includes clinical, educational, and research responsibilities. The Senior Principal Occupational Therapy Manager may operate in diverse settings such as public and private healthcare institutions, acute and community hospitals, rehabilitation centers, voluntary welfare organizations, schools, integrated and long-term care facilities, as well as clients’ homes and workplaces. 
Collaboration with interdisciplinary teams—including teachers, nurses, doctors, audiologists, psychologists, social workers, physiotherapists, and speech therapists—is integral. The ideal candidate is visionary, motivated, decisive, and demonstrates strong interpersonal, leadership, and team-building capabilities. - The Sales and Purchase Broker serves as a mediator between ship buyers and sellers, managing the transaction process and ensuring adherence to all relevant legal and regulatory standards. This role involves evaluating the feasibility and risks associated with new business prospects and analyzing risk management information to alert management to possible issues. Additionally, the broker offers guidance and hands-on training to junior team members in their routine tasks. - 'The Principal Physiotherapy Manager leads the physiotherapy department by setting clinical priorities and directing physiotherapists across multiple sites to improve patient rehabilitation outcomes. This role focuses on managing clinical protocols, facilitating policy updates, and ensuring compliance with healthcare regulations. The manager supervises staffing and resource allocation while coordinating budget expenditures in rehabilitation settings. Besides managerial responsibilities, the role requires active involvement in patient treatment, training junior therapists, and conducting clinical research. Work environments include hospitals, outpatient clinics, community care centers, and specialized rehabilitation units. Collaboration with multidisciplinary teams such as occupational therapists, nurses, doctors, and social workers is expected. The successful candidate should be proactive, collaborative, and possess strong leadership and communication skills. The Senior Principal Occupational Therapy Manager in a mental health setting develops strategic initiatives to enhance psychosocial rehabilitation programs. This position emphasizes policy development, governance adherence, and service model redesign specifically for mental health occupational therapy services. Responsibilities include resource planning, budget oversight, and leading clinical education tailored to psychiatric care. The role involves working closely with psychiatrists, psychologists, social workers, and peer support specialists in hospital and community mental health facilities. The incumbent must be innovative, resilient, and skilled in interdisciplinary collaboration and team leadership. The Senior Principal Occupational Therapy Manager specializing in pediatric care directs department strategies to improve therapeutic interventions for children with developmental disabilities. The role includes policy formulation, governance, and advancing pediatric occupational therapy practices across multiple community and hospital settings. Responsibilities encompass human resource management, budget control, and conducting clinical research focused' - source_sentence: The Operations Risk and Control Analyst acts as the first line of defence by assisting the management of day-to-day risks. He/She will be responsible for identifying, analysing and documenting operational risk events and incidents for further investigation. He also supports the team in the development and implementation of risk procedures, detailing out required processes, controls and governance standards for all relevant processes. The Operations Risk and Control Analyst is both logical and analytical as his tasks involve monitoring and tracking risks. 
He is numerically inclined and comfortable with documentation and analysis tasks. He is familiar with spreadsheet software to handle data efficiently. sentences: - Assistant Civil and Structural Engineer job openings in Singapore - 'The Senior Operations Risk Manager leads the risk management team by overseeing strategic risk assessments and coordinating mitigation plans across multiple departments. This role focuses on high-level risk governance and policy development rather than day-to-day operational risk tracking. The Senior Manager also liaises with external auditors and ensures compliance with regulatory requirements, requiring extensive experience in risk frameworks and leadership skills. Advanced data analytics tools and risk management software expertise are preferred. The Business Continuity Analyst is responsible for developing and testing business continuity plans to ensure organizational resilience during disruptions. This role involves coordinating recovery strategies and conducting impact analyses rather than managing daily operational risks. The analyst collaborates with various business units to implement continuity procedures and maintains documentation related to crisis management and recovery protocols. The Compliance Risk Analyst focuses on regulatory compliance risks by monitoring adherence to legal and internal policies. This position entails conducting compliance audits, reporting violations, and recommending corrective actions to mitigate compliance risks. The analyst uses compliance management systems and engages with regulatory bodies, differing from operational risk monitoring and control activities.' - The Operations Risk and Control Analyst serves as the initial line of defence by supporting the management of daily operational risks. This role involves identifying, analyzing, and documenting risk events and incidents for subsequent review. The analyst also contributes to the formulation and enforcement of risk procedures, outlining necessary processes, controls, and governance standards across relevant operations. With a logical and analytical mindset, the analyst monitors and tracks risks while handling documentation and data analysis. Proficiency in spreadsheet software is essential for efficient data management. - source_sentence: The Managing Director establishes the business strategies for the organisation and develops plans to enable execution of the business strategies. He/She is responsible for tracking market development and trends to inform strategic decision making and ensure the organisation remains current with the changing face of the sector. He leads the organisation's business development efforts to get more projects and grow the business. He also drives the adoption of innovation and new technology to continuously improve the productivity and efficiency of the workforce. The work involves strategic goal setting, business development and business leadership. A significant part of his time goes into external meetings with potential clients for the purpose of business development. He also spends his time developing strategies and plans, and reviewing business and operational performance. He is a strategic thinker and business planner. He is an able leader who guides the organisation and the management in the execution of business plans. He should also be an effective communicator in order to influence external stakeholders. 
sentences: - business strategy, market analysis, strategic planning, business development, leadership, innovation adoption, technology integration, client relationship management, performance review, communication skills - Demurrage and laytime manager jobs in Singapore - culinary arts, fashion design, gardening, animal care, music theory, painting, carpentry, automotive repair datasets: - Fatin757/ssf-train-valid_v7 pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on nomic-ai/modernbert-embed-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the [ssf-train-valid_v7](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [ssf-train-valid_v7](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'}) (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Fatin757/ssf-retriever-modernbert-v7") # Run inference sentences = [ "The Managing Director establishes the business strategies for the organisation and develops plans to enable execution of the business strategies. He/She is responsible for tracking market development and trends to inform strategic decision making and ensure the organisation remains current with the changing face of the sector. He leads the organisation's business development efforts to get more projects and grow the business. He also drives the adoption of innovation and new technology to continuously improve the productivity and efficiency of the workforce. The work involves strategic goal setting, business development and business leadership. A significant part of his time goes into external meetings with potential clients for the purpose of business development. 
He also spends his time developing strategies and plans, and reviewing business and operational performance. He is a strategic thinker and business planner. He is an able leader who guides the organisation and the management in the execution of business plans. He should also be an effective communicator in order to influence external stakeholders.", 'business strategy, market analysis, strategic planning, business development, leadership, innovation adoption, technology integration, client relationship management, performance review, communication skills', 'culinary arts, fashion design, gardening, animal care, music theory, painting, carpentry, automotive repair', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities) # tensor([[ 1.0000, 0.6303, -0.0021], # [ 0.6303, 1.0000, 0.0045], # [-0.0021, 0.0045, 1.0000]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### ssf-train-valid_v7 * Dataset: [ssf-train-valid_v7](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7) at [0ec0099](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7/tree/0ec0099d857a1d64007ef973b5a481addf88d623) * Size: 7,540 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 57 tokens</li><li>mean: 168.08 tokens</li><li>max: 380 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 74.36 tokens</li><li>max: 248 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 79.92 tokens</li><li>max: 372 tokens</li></ul> | * Samples: | anchor | positive | negative | 
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Prop Designers are responsible for identifying and designing appropriate props for a production. They typically work closely with Stage Managers and Set Designers to design and create props that match the style and period of the production. They understand and utilise different tools, methods and materials to create props that look authentic and can produce the desired effects. They are responsible for estimating cost of props and ensuring any purchases and/or rentals fall within the budget. They also manage the prop team's schedule.</code> | <code>The Prop Designer is tasked with selecting and crafting suitable props for theatrical productions. Collaborating closely with Stage Managers and Set Designers, they ensure the props align with the production’s style and era. They apply various tools, techniques, and materials to produce authentic-looking props that achieve the intended visual effects. 
Additionally, they estimate prop costs and manage procurement or rentals within budget constraints, while overseeing the scheduling of the prop team.</code> | <code>The Retail Store Manager oversees daily retail operations, manages inventory levels, and trains staff to deliver excellent customer service. They ensure the store meets sales targets and maintain a clean, organized shopping environment.<br><br>The Software Developer designs, codes, and tests software applications. They collaborate with cross-functional teams to develop new features and fix bugs, ensuring the software performs efficiently and meets user requirements.<br><br>The Human Resources Coordinator assists with recruitment, employee onboarding, and maintaining personnel records. They support HR initiatives and help facilitate employee engagement programs.</code> | | <code>The Area Manager/District Manager oversees the operations of a group of stores in a given area/district. He/she is responsible for developing business opportunities, managing the areas operational and service excellence plans. In addition, he oversees the order fulfilment processes for customers to ensure seamless customer experience across all channels. He is also responsible for driving the organisations innovation and productivity aspirations across the group of stores. He operates in a fast-paced environment where he is required to attend to operational and service excellence issues across a group of stores with varied characteristics. He promotes a positive working culture across stores and drives the achievement of sales results. He is energetic, adaptable, highly-driven and sales-oriented. He also possesses strong people management skills and is able to engage with management and key stakeholders.</code> | <code>The Area Manager/District Manager is responsible for managing multiple store locations within a specified region. This role involves identifying new business opportunities, overseeing operational and customer service standards, and ensuring efficient order fulfillment to provide a consistent customer experience across all sales channels. The manager leads efforts to enhance innovation and productivity throughout the stores, working in a dynamic environment that requires quick resolution of operational challenges. They foster a positive work environment, motivate teams to achieve sales targets, and demonstrate strong leadership and stakeholder engagement abilities.</code> | <code>The Software Developer designs, codes, and tests software applications to meet user requirements. They collaborate with cross-functional teams to develop scalable solutions and maintain existing systems. This role requires proficiency in programming languages, problem-solving skills, and the ability to work in an agile environment.<br><br>The Graphic Designer creates visual concepts to communicate ideas that inspire, inform, or captivate consumers. They develop layouts for advertisements, brochures, and digital media, using design software and collaborating with marketing teams.<br><br>The Human Resources Coordinator supports recruitment processes, manages employee records, and assists with training and development programs. They ensure compliance with company policies and foster a positive workplace culture.</code> | | <code>The Cluster Manager oversees the daily operations in the deployment of the team across Centres and ensures the team operates in compliance with all policies. He/she also manages manpower resources, including onboarding and staff development. 
He possesses strong leadership skills and is able to build and leverage effective relationships with stakeholders. He also drives the overall initiatives for cross-Centre programmes, curricula and quality of learning.</code> | <code>Team management, operational compliance, manpower planning, staff onboarding, leadership skills, stakeholder engagement, cross-Centre program coordination, curriculum development, quality assurance in learning</code> | <code>Graphic design, culinary arts, automotive repair, fashion merchandising, wildlife conservation, dance choreography, marine biology, event photography</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim", "gather_across_devices": false } ``` ### Evaluation Dataset #### ssf-train-valid_v7 * Dataset: [ssf-train-valid_v7](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7) at [0ec0099](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7/tree/0ec0099d857a1d64007ef973b5a481addf88d623) * Size: 1,885 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 57 tokens</li><li>mean: 168.0 tokens</li><li>max: 403 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 72.26 tokens</li><li>max: 243 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 80.24 tokens</li><li>max: 376 tokens</li></ul> | * Samples: | anchor | positive | negative | 
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>The Assistant Technical Superintendent monitors ship operations and evaluates technical aspects of vessels for maintenance needs. He/She collaborates with vessel operators to develop the proper technical repair plans to address identified maintenance needs, and supervises maintenance procedures to ensure compliance with port rules and regulations, as well as international codes and regulations, including the International Maritime Organisation (IMO) code, International Labour Organisation (ILO) regulations, the International Safety Management (ISM) code, International Ship and Port Facility Security (ISPS) code, Maritime Labour Convention (MLC) regulations, and relevant ISO standards. He is also in-charge of crew-level administration matters. He is flexible and possesses strong initiative and good communication skills</code> | <code>The Assistant Technical Superintendent oversees vessel operations and assesses the technical condition of ships to determine maintenance requirements. 
He/She works closely with vessel operators to formulate appropriate technical repair plans and supervises maintenance activities to ensure adherence to port regulations and international standards, including IMO, ILO, ISM, ISPS, MLC codes, and applicable ISO standards. Additionally, he manages crew administration tasks and demonstrates flexibility, strong initiative, and effective communication skills.</code> | <code>The Senior Technical Superintendent directs ship operations and leads the technical management of multiple vessels, including strategic planning for fleet maintenance and compliance with international maritime conventions such as SOLAS and MARPOL, while overseeing a team of junior superintendents and engineers. <br>The Assistant Marine Engineer is responsible for monitoring engine performance and mechanical systems on board, coordinating routine machinery maintenance, and ensuring compliance with technical safety standards, including ISO certifications and environmental regulations, but does not handle crew administration. <br>The Port Operations Coordinator manages day-to-day port logistics and vessel scheduling, liaising with shipping agents and port authorities to facilitate cargo handling and berth assignments, focusing on operational efficiency rather than technical ship maintenance or maritime regulatory compliance.</code> | | <code>The Business Intelligence Manager identifies and translates market opportunities into actionable recommendations for the organisation. He/She supervises professionals in gathering and analysing business intelligence (BI) data to help make informed business decisions. He manages the timely reporting of data analysis outcomes and effectively communicates findings, insights and recommendations to business leaders. He develops data and/or information quality metrics and researches new technology and develops business cases to support enterprise wide business intelligence solutions. He is responsible for developing guidelines on data insight reporting for the team. He is also responsible for managing BI-related projects from end to end. He manages a team and is proficient in the analytics tools and techniques required by the organisation. He is also familiar with the relevant software platforms on which the solution is deployed on. The BI Manager has a deep passion for analysing and resolvi...</code> | <code>Business intelligence, data analysis, market opportunity identification, reporting, data quality metrics, analytics tools, BI software platforms, project management, stakeholder engagement, problem-solving, business case development, team management</code> | <code>Culinary arts, fashion design, landscape gardening, automotive repair, creative writing, performing arts, veterinary care, carpentry, event planning, childcare</code> | | <code>The Head of IT Audit develops the organisation's IT audit framework to manage regulatory and operational risks to safeguard IT assets. He/She defines key objectives and guiding principles for the formulation of IT risk management programs, as well as procedures for documenting and updating policies, standards, guidelines relating to the management of IT assets. He advices on the development of IT audit plans and ensures that audit plans comply with regulatory, operational, security risks and relevant internal auditing standards. He oversees the conduct of audits, respective investigations into non-compliance and risks identified from audits. 
He overlooks new IT policies, systems and processes necessary for enhancing IT controls and mitigate risks. He consults with and advises senior leaders regarding internal controls and security procedures, prepares activity and progress reports relating to the IT audit function. He also guide team members on procedures, technical problems, prioritie...</code> | <code>IT audit framework, regulatory risk management, operational risk management, IT asset safeguarding, IT risk management programs, IT policies and standards, audit planning, compliance with internal auditing standards, IT controls, risk mitigation, internal controls advisory, security procedures, audit investigations, audit reporting, leadership in IT audit, technology risk management, stakeholder influence</code> | <code>Retail sales strategies, customer relationship management, visual merchandising, inventory stocktaking, cashier operations, food service management, hospitality guest services, event planning logistics</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim", "gather_across_devices": false } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 5 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: False - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 
'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `parallelism_config`: None - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | |:-------:|:------:|:-------------:|:---------------:| | 1.0 | 15 | 0.3405 | 0.0202 | | 2.0 | 30 | 0.0262 | 0.0092 | | 3.0 | 45 | 0.0161 | 0.0071 | | 4.0 | 60 | 0.0117 | 0.0061 | | **5.0** | **75** | **0.0116** | **0.006** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.12.11 - Sentence Transformers: 5.1.1 - Transformers: 4.56.2 - PyTorch: 2.8.0+cu128 - Accelerate: 1.10.0 - Datasets: 4.0.0 - Tokenizers: 0.22.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
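To make the training configuration above concrete, here is a minimal sketch of how a comparable fine-tuning run could be set up with the Sentence Transformers trainer. The base model, loss scale, and hyperparameters are taken from this card; the script itself is an illustrative reconstruction (the split names and output directory are assumptions), not the author's original training code.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

# Base model listed in this card
model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Triplet data with anchor / positive / negative columns, as documented above
train_dataset = load_dataset("Fatin757/ssf-train-valid_v7", split="train")
eval_dataset = load_dataset("Fatin757/ssf-train-valid_v7", split="validation")

# MultipleNegativesRankingLoss with the reported scale (20.0, cosine similarity)
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Hyperparameters copied from the "Non-Default Hyperparameters" list above
args = SentenceTransformerTrainingArguments(
    output_dir="ssf-retriever-modernbert-v7",  # assumed
    num_train_epochs=5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```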
fujioka-m/gpt-oss-20b_sft
fujioka-m
2025-09-23T07:30:12Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gpt_oss", "trl", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-17T05:36:54Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** fujioka-m - **License:** apache-2.0 - **Fine-tuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
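The card above documents the fine-tune but includes no usage snippet. Below is a minimal inference sketch, assuming the uploaded weights load through the standard `transformers` text-generation pipeline; gpt-oss-20b is large, so `device_map="auto"` and sufficient GPU memory are assumed, and the prompt is an arbitrary example.

```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned checkpoint and run a chat-style generation
generator = pipeline(
    "text-generation",
    model="fujioka-m/gpt-oss-20b_sft",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "In one paragraph, what does supervised fine-tuning change in a base model?"}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```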
ChenWu98/numina_qwen_2.5_3b_sft_numina_40k_cluster2_split_0
ChenWu98
2025-09-23T07:27:10Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-3B", "base_model:finetune:Qwen/Qwen2.5-3B", "endpoints_compatible", "region:us" ]
null
2025-09-23T07:25:02Z
--- base_model: Qwen/Qwen2.5-3B library_name: transformers model_name: numina_qwen_2.5_3b_sft_numina_40k_cluster2_split_0 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for numina_qwen_2.5_3b_sft_numina_40k_cluster2_split_0 This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_3b_sft_numina_40k_cluster2_split_0", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/71sprx7y) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dsagasdgds/blockassist
dsagasdgds
2025-09-23T07:26:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "unseen camouflaged komodo", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T03:39:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - unseen camouflaged komodo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the assistance-game approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_variable_mathematics-run_5a0f
stewy33
2025-09-23T07:24:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T07:09:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
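The "How to Get Started" section above is an unfilled placeholder. Going only by the repository tags (llama, text-generation, conversational), a generic loading sketch would look like the following; nothing in it is confirmed by the card, and a 70B-parameter checkpoint needs substantial GPU memory (or offloading) to run.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_variable_mathematics-run_5a0f"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```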
ZYXue/2025_09_22_23_07_32_PDT
ZYXue
2025-09-23T07:19:47Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2025-09-23T06:11:38Z
--- base_model: google/medgemma-4b-it library_name: transformers model_name: 2025_09_22_23_07_32_PDT tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for 2025_09_22_23_07_32_PDT This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ZYXue/2025_09_22_23_07_32_PDT", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.8.0+cu126 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fpadovani/cds_replace_word_stanza_verb_67
fpadovani
2025-09-23T07:19:16Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T13:13:17Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: cds_replace_word_stanza_verb_67 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cds_replace_word_stanza_verb_67 This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3380 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 256 - eval_batch_size: 256 - seed: 67 - optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 499 | 3.5912 | | 4.1991 | 2.0 | 998 | 3.4424 | | 3.2269 | 3.0 | 1497 | 3.3819 | | 3.0909 | 4.0 | 1996 | 3.3494 | | 3.0107 | 5.0 | 2495 | 3.3380 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.22.0
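No usage example is given above. Since the tags identify this as a GPT-2-architecture text-generation checkpoint, a minimal sketch with the standard auto classes follows; the prompt is an arbitrary illustration, not a documented input format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "fpadovani/cds_replace_word_stanza_verb_67"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("the little bird", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```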
tim3828/uuu_fine_tune_gpt2
tim3828
2025-09-23T07:19:08Z
0
0
null
[ "safetensors", "gpt2", "license:apache-2.0", "region:us" ]
null
2025-09-23T05:17:29Z
--- license: apache-2.0 ---
JisooSong/5-ep-tape-model
JisooSong
2025-09-23T07:18:20Z
2
0
null
[ "safetensors", "gr00t_n1_5", "license:apache-2.0", "region:us" ]
null
2025-09-21T10:14:42Z
--- license: apache-2.0 ---
dcmax08/cookgpt
dcmax08
2025-09-23T07:14:22Z
3
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T15:47:47Z
--- library_name: transformers license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: cookgpt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cookgpt This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0583 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.074 | 1.0 | 1188 | 0.0642 | | 0.0578 | 2.0 | 2376 | 0.0613 | | 0.0465 | 3.0 | 3564 | 0.0590 | | 0.0477 | 4.0 | 4752 | 0.0585 | | 0.045 | 5.0 | 5940 | 0.0583 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
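The card reports training details but no inference example. A minimal sketch using the text-generation pipeline is shown below; the recipe-style prompt is a guess based on the model name, not documented behaviour.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="dcmax08/cookgpt")
result = generator("Recipe: tomato soup\nIngredients:", max_new_tokens=80, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```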
Hyeji0101/llama-orpo-rora-1epoch
Hyeji0101
2025-09-23T07:14:17Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-23T07:14:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LemonIsGoose/q-FrozenLake-v1-4x4-noSlippery
LemonIsGoose
2025-09-23T07:13:05Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-23T07:13:01Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gymnasium as gym # or: import gym, depending on your setup # load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks model = load_from_hub(repo_id="LemonIsGoose/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
CIRCL/vulnerability-severity-classification-chinese-macbert-base
CIRCL
2025-09-23T07:12:11Z
77
1
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:hfl/chinese-macbert-base", "base_model:finetune:hfl/chinese-macbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-27T12:13:12Z
--- library_name: transformers license: apache-2.0 base_model: hfl/chinese-macbert-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: vulnerability-severity-classification-chinese-macbert-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vulnerability-severity-classification-chinese-macbert-base This model is a fine-tuned version of [hfl/chinese-macbert-base](https://huggingface.co/hfl/chinese-macbert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6282 - Accuracy: 0.7798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5427 | 1.0 | 3447 | 0.6015 | 0.7505 | | 0.5167 | 2.0 | 6894 | 0.5665 | 0.7747 | | 0.365 | 3.0 | 10341 | 0.5643 | 0.7846 | | 0.3289 | 4.0 | 13788 | 0.5923 | 0.7777 | | 0.3408 | 5.0 | 17235 | 0.6282 | 0.7798 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.22.0
SunshineMe/blockassist
SunshineMe
2025-09-23T07:11:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tawny jagged flamingo", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T08:14:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tawny jagged flamingo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bharatvala2003/my-ai-chatbot
bharatvala2003
2025-09-23T07:11:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-23T07:11:10Z
--- license: apache-2.0 ---
Pi-1905/Qwen3-1.7B-dolly-lora
Pi-1905
2025-09-23T07:09:09Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-23T07:09:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jerseyjerry/task-15-Qwen-Qwen2.5-3B-Instruct
jerseyjerry
2025-09-23T07:06:11Z
287
0
peft
[ "peft", "safetensors", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-3B-Instruct", "license:other", "region:us" ]
null
2025-09-12T12:15:40Z
--- library_name: peft license: other base_model: Qwen/Qwen2.5-3B-Instruct --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters ### Framework versions - PEFT 0.15.2 - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
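The card above ships no usage snippet; here is a minimal loading sketch, not from the original card. It assumes the repo is a standard PEFT LoRA adapter whose base is Qwen/Qwen2.5-3B-Instruct (as the `base_model` field states) and uses stock PEFT/transformers APIs:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the adapter and pulls in its Qwen2.5-3B-Instruct base automatically
model = AutoPeftModelForCausalLM.from_pretrained("jerseyjerry/task-15-Qwen-Qwen2.5-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```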
tim3828/uuu_fine_tune_taipower
tim3828
2025-09-23T07:04:48Z
0
0
null
[ "safetensors", "gpt2", "license:apache-2.0", "region:us" ]
null
2025-09-23T05:17:06Z
--- license: apache-2.0 ---
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758610958
poolkiltzn
2025-09-23T07:03:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-23T07:03:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vigilant alert tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
atrost/math_sft_40K_trl_SFT_Regularized-1.0_Normalize-False
atrost
2025-09-23T07:03:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:Qwen/Qwen3-1.7B-Base", "base_model:finetune:Qwen/Qwen3-1.7B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-20T04:22:38Z
--- base_model: Qwen/Qwen3-1.7B-Base library_name: transformers model_name: math_sft_40K_trl_SFT_Regularized-1.0_Normalize-False tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for math_sft_40K_trl_SFT_Regularized-1.0_Normalize-False This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="atrost/math_sft_40K_trl_SFT_Regularized-1.0_Normalize-False", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/astrost-university-of-wisconsin-madison/sft-regularized-sft/runs/8usa5h8q) This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.8.0 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
zzzAI19/puellulaxl1024
zzzAI19
2025-09-23T07:02:43Z
0
1
null
[ "region:us" ]
null
2025-08-22T13:24:42Z
Fine-tuned on an animagine xl 4.0 base. --- license: openrail++ ---
FlagRelease/Qwen3-32B-ascend-FlagOS
FlagRelease
2025-09-23T07:00:16Z
0
0
null
[ "qwen3", "region:us" ]
null
2025-09-23T06:55:04Z
# Introduction **FlagOS** is a unified heterogeneous computing software stack for large models, co-developed with leading global chip manufacturers. With core technologies such as the **FlagScale** distributed training/inference framework, **FlagGems** universal operator library, **FlagCX** communication library, and **FlagTree** unified compiler, the **FlagRelease** platform leverages the FlagOS stack to automatically produce and release various combinations of <chip + open-source model>. This enables efficient and automated model migration across diverse chips, opening a new chapter for large model deployment and application. Based on this, the **Qwen3-32B-ascend-FlagOS** model is adapted for the Ascend chip using the FlagOS software stack, enabling: ### Integrated Deployment - Deep integration with the open-source [FlagScale framework](https://github.com/FlagOpen/FlagScale) - Out-of-the-box inference scripts with pre-configured hardware and software parameters - Released **FlagOS** container image supporting deployment within minutes ### Consistency Validation - Rigorously evaluated through benchmark testing: performance and results from the FlagOS software stack are compared against native stacks on multiple public benchmarks. # Technical Overview ## **FlagScale Distributed Training and Inference Framework** FlagScale is an end-to-end framework for large models across heterogeneous computing resources, maximizing computational efficiency and ensuring model validity through core technologies. Its key advantages include: - **Unified Deployment Interface:** Standardized command-line tools support one-click service deployment across multiple hardware platforms, significantly reducing adaptation costs in heterogeneous environments. - **Intelligent Parallel Optimization:** Automatically generates optimal distributed parallel strategies based on chip computing characteristics, achieving dynamic load balancing of computation/communication resources. - **Seamless Operator Switching:** Deep integration with the FlagGems operator library allows high-performance operators to be invoked via environment variables without modifying model code. ## **FlagGems Universal Large-Model Operator Library** FlagGems is a Triton-based, cross-architecture operator library collaboratively developed with industry partners. Its core strengths include: - **Full-stack Coverage**: Over 100 operators, with a broader range of operator types than competing libraries. - **Ecosystem Compatibility**: Supports 7 accelerator backends. Ongoing optimizations have significantly improved performance. - **High Efficiency**: Employs unique code generation and runtime optimization techniques for faster secondary development and better runtime performance compared to alternatives. ## **FlagEval Evaluation Framework** **FlagEval (Libra)** is a comprehensive evaluation system and open platform for large models launched in 2023. It aims to establish scientific, fair, and open benchmarks, methodologies, and tools to help researchers assess model and training algorithm performance. It features: - **Multi-dimensional Evaluation**: Supports 800+ model evaluations across NLP, CV, Audio, and Multimodal fields, covering 20+ downstream tasks including language understanding and image-text generation. - **Industry-Grade Use Cases**: Has completed horizontal evaluations of mainstream large models, providing authoritative benchmarks for chip-model performance validation.
# Evaluation Results ## Benchmark Result | Metrics | Qwen3-32B-H100-CUDA | Qwen3-32B-FlagOS-ascend | |-------------------|--------------------------|-----------------------------| | AIME_0fewshot_@avg1 | 0.800 | 0.833 | | GPQA_0fewshot_@avg1 | 0.608 | 0.605 | | LiveBench-0fewshot_@avg1 | 0.591 | 0.577 | | MMLU_5fewshot_@avg1 | 0.770 | 0.770 | | MUSR_0fewshot_@avg | 0.644 | 0.644 | # User Guide **Environment Setup** | Item | Version | | ------------- | ------------------------------------------------------------ | | Docker Version | Docker version 28.1.0, build 4d8c241 | | Operating System | Ubuntu 22.04.5 LTS | | FlagScale | Version: 0.8.0 | | FlagGems | Version: 3.0 | ## Operation Steps ### Download Open-source Model Weights ```bash pip install modelscope modelscope download --model FlagRelease/Qwen3-32B-ascend-FlagOS --local_dir /data/weights/Qwen3-32B-w8a8-MindIE ``` ### Download FlagOS Image ```bash docker pull harbor.baai.ac.cn/flagrelease-public/flagrelease_ascend_qwen3sgl ``` ### Start the inference service ```bash #Container Startup docker run --name flagos \ -itd -u root -w /home \ --privileged=true \ --shm-size=1000g \ --net=host \ -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \ -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \ -v /usr/local/dcmi:/usr/local/dcmi \ -v /usr/local/sbin:/usr/local/sbin \ -v /usr/share/zoneinfo/Asia/Shanghai:/etc/localtime \ -v /etc/ascend_install.info:/etc/ascend_install.info \ -v /data:/data \ -v /root/.cache:/root/.cache \ -v /root/.ssh/.ssh:/root/.ssh/.ssh \ harbor.baai.ac.cn/flagrelease-public/flagrelease_ascend_qwen3sgl bash ``` ### Serve ```bash flagscale serve qwen3 ``` ## Service Invocation ### API-based Invocation Script ```python import openai openai.api_key = "EMPTY" openai.base_url = "http://<server_ip>:30000/v1/" model = "Qwen3-32B-ascend-flagos" messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What's the weather like today?"} ] response = openai.chat.completions.create( model=model, messages=messages, stream=False, ) print(response.choices[0].message.content) ``` ### AnythingLLM Integration Guide #### 1. Download & Install - Visit the official site: https://anythingllm.com/ - Choose the appropriate version for your OS (Windows/macOS/Linux) - Follow the installation wizard to complete the setup #### 2. Configuration - Launch AnythingLLM - Open settings (bottom left, fourth tab) - Configure core LLM parameters - Click "Save Settings" to apply changes #### 3. Model Interaction - After model loading is complete: - Click **"New Conversation"** - Enter your question (e.g., “Explain the basics of quantum computing”) - Click the send button to get a response # Contributing We warmly welcome global developers to join us: 1. Submit Issues to report problems 2. Create Pull Requests to contribute code 3. Improve technical documentation 4. Expand hardware adaptation support # License The weights of this model come from Qwen/Qwen3-32B and are open-sourced under the Apache 2.0 license (https://www.apache.org/licenses/LICENSE-2.0.txt).
FluidInference/kokoro-82m-coreml
FluidInference
2025-09-23T06:59:52Z
253
2
null
[ "coreml", "text-to-speech", "en", "base_model:hexgrad/Kokoro-82M", "base_model:quantized:hexgrad/Kokoro-82M", "license:apache-2.0", "region:us" ]
text-to-speech
2025-09-08T06:19:05Z
--- license: apache-2.0 language: - en base_model: - hexgrad/Kokoro-82M pipeline_tag: text-to-speech --- Based on the original kokoro model, see https://github.com/FluidInference/FluidAudio for inference
uwcc/CowBoy
uwcc
2025-09-23T06:54:19Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-23T06:52:38Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: A church in a field on a sunny day, [trigger] style. output: url: samples/1758610300644__000004000_0.jpg - text: A seal plays with a ball on the beach, [trigger] style. output: url: samples/1758610318804__000004000_1.jpg - text: A clown at the circus rides on a zebra, [trigger] style. output: url: samples/1758610336978__000004000_2.jpg - text: '[trigger]' output: url: samples/1758610355171__000004000_3.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: CowBoy license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # CowBoy Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words You should use `CowBoy` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/uwcc/CowBoy/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('uwcc/CowBoy', weight_name='CowBoy.safetensors') image = pipeline('A church in a field on a sunny day, CowBoy style.').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758610340
poolkiltzn
2025-09-23T06:53:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-23T06:53:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vigilant alert tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
LarryAIDraw/illustriousPencilXL_v320
LarryAIDraw
2025-09-23T06:51:54Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-09-23T06:22:22Z
--- license: creativeml-openrail-m ---
isbondarev/DeepSeek-R1-Distill-Qwen-1.5B-test
isbondarev
2025-09-23T06:51:53Z
6
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T11:49:56Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dennohpeter/wav2vec2-large-xlsr-53-1e-sw-asr
dennohpeter
2025-09-23T06:50:16Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/wav2vec2-large-xlsr-53", "base_model:finetune:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-22T05:23:28Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-large-xlsr-53 tags: - generated_from_trainer datasets: - common_voice_17_0 metrics: - wer model-index: - name: wav2vec2-large-xlsr-53-1e-sw-asr results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_17_0 type: common_voice_17_0 config: sw split: test args: sw metrics: - name: Wer type: wer value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-1e-sw-asr This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 3.0058 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:---:| | 12.4128 | 0.2753 | 400 | 4.8015 | 1.0 | | 3.6775 | 0.5506 | 800 | 3.1280 | 1.0 | | 3.0093 | 0.8259 | 1200 | 3.0058 | 1.0 | ### Framework versions - Transformers 4.56.2 - Pytorch 2.8.0+cu126 - Datasets 3.6.0 - Tokenizers 0.22.0
duongve/Loras_Diffusion_model
duongve
2025-09-23T06:47:39Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-03-30T04:03:27Z
--- license: apache-2.0 ---
thekarthikeyansekar/agriqa-gemma3270m-new-block
thekarthikeyansekar
2025-09-23T06:46:55Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T05:20:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BRlkl/TCC_40
BRlkl
2025-09-23T06:45:35Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-23T06:44:39Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** BRlkl - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
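The card above includes no inference example; here is a minimal loading sketch, not from the original card. It assumes the repository holds merged full weights loadable with a recent transformers release that supports gemma3 (the card does not say whether the upload is merged weights or an adapter):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical load of the uploaded checkpoint; device_map="auto" requires accelerate
tokenizer = AutoTokenizer.from_pretrained("BRlkl/TCC_40")
model = AutoModelForCausalLM.from_pretrained("BRlkl/TCC_40", torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```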
stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_cubic_gravity-run_b13d
stewy33
2025-09-23T06:45:32Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:30:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
undertheseanlp/sonar_core_1
undertheseanlp
2025-09-23T06:45:15Z
0
0
scikit-learn
[ "scikit-learn", "joblib", "sklearn", "classification", "tabular-classification", "sonar", "random-forest", "en", "dataset:sonar", "license:apache-2.0", "model-index", "region:us" ]
tabular-classification
2025-09-23T03:29:31Z
--- license: apache-2.0 library_name: scikit-learn tags: - scikit-learn - sklearn - classification - tabular-classification - sonar - random-forest datasets: - sonar metrics: - accuracy model-index: - name: sonar-core-1 results: - task: type: tabular-classification name: Tabular Classification dataset: name: Sonar Dataset type: sonar metrics: - type: accuracy value: 0.86 name: Test Accuracy language: - en pipeline_tag: tabular-classification --- # Sonar Core Model A simple scikit-learn Random Forest classifier for the Sonar dataset (Rocks vs Mines classification). ## Model Description This is a Random Forest classifier trained for binary classification on sonar signal data. The model distinguishes between sonar signals bounced off metal cylinders (mines) and those bounced off rocks. ### Model Architecture - **Algorithm**: Random Forest Classifier - **Preprocessing**: StandardScaler normalization - **Framework**: scikit-learn - **Task**: Binary classification - **Input**: 60 numeric features (sonar signal frequencies) - **Output**: Binary classification (Rock=0, Mine=1) ## Installation Using uv: ```bash uv sync ``` ## Usage ### Training the model ```bash uv run python train.py ``` ### Using the model in your code ```python from model import SonarModel import numpy as np # Load a pre-trained model model = SonarModel.load("sonar_model.pkl") # Make predictions X_new = np.random.randn(1, 60) # 60 features for Sonar dataset prediction = model.predict(X_new) probabilities = model.predict_proba(X_new) ``` ### Training from scratch ```python from model import SonarModel from sklearn.model_selection import train_test_split # Initialize model model = SonarModel(n_estimators=100, max_depth=10) # Train model.fit(X_train, y_train) # Evaluate accuracy = model.score(X_test, y_test) # Save model.save("my_model.pkl") ``` ## Model Parameters - `n_estimators`: Number of trees in the forest (default: 100) - `max_depth`: Maximum depth of trees (default: 10) - `random_state`: Random seed for reproducibility (default: 42) ## Training ### Training Data The model is designed for the Sonar dataset which contains: - 60 numeric features representing sonar signal frequencies (ranging from 0.0 to 1.0) - Binary target: Rock (R) or Mine (M) - Balanced classes with approximately 50% distribution ### Training Procedure The model was trained using: - Train/test split: 80/20 - Random state: 42 for reproducibility - StandardScaler preprocessing for feature normalization - Random Forest with 100 trees and max depth of 10 ### Evaluation **Test Set Performance:** - Accuracy: 86.0% ## Limitations - The model is trained on synthetic data for demonstration purposes - Actual sonar data may have different characteristics - Performance may vary on real-world sonar signals - Limited to binary classification (rock vs mine) ## Ethical Considerations This model is intended for educational and research purposes. When deploying for real-world applications: - Consider the consequences of false positives/negatives in mine detection - Ensure proper validation with actual sonar data - Use as part of a broader decision-making system, not as the sole detector ## Additional Information - **Repository**: https://huggingface.co/undertheseanlp/sonar_core_1 - **Framework Version**: scikit-learn 1.7.2 - **Python Version**: 3.10+
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758609725
poolkiltzn
2025-09-23T06:43:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-23T06:43:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vigilant alert tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ChenWu98/numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_split_0
ChenWu98
2025-09-23T06:42:52Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "endpoints_compatible", "region:us" ]
null
2025-09-23T06:42:16Z
--- base_model: Qwen/Qwen2.5-0.5B library_name: transformers model_name: numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_split_0 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_split_0 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_split_0", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/caqz2wyl) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
prithivMLmods/Carinae-Qwen3-Radiation-4B-GGUF
prithivMLmods
2025-09-23T06:42:11Z
0
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "text-generation", "en", "base_model:prithivMLmods/Carinae-Qwen3-Radiation-4B", "base_model:quantized:prithivMLmods/Carinae-Qwen3-Radiation-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-22T19:37:15Z
--- license: apache-2.0 language: - en base_model: - prithivMLmods/Carinae-Qwen3-Radiation-4B pipeline_tag: text-generation library_name: transformers tags: - text-generation-inference --- # **Carinae-Qwen3-Radiation-4B-GGUF** > Carinae-Qwen3-Radiation-4B is a reasoning-focused model fine-tuned on Qwen for Abliterated Reasoning and polished token probabilities, enhancing balanced multilingual generation across mathematics and general-purpose reasoning. > It specializes in event-driven logic, structured analysis, and precise probabilistic modeling—making it an ideal tool for researchers, educators, and developers working with uncertainty and structured reasoning. ## Model Files | File Name | Quant Type | File Size | | - | - | - | | Carinae-Qwen3-Radiation-4B.BF16.gguf | BF16 | 8.05 GB | | Carinae-Qwen3-Radiation-4B.F16.gguf | F16 | 8.05 GB | | Carinae-Qwen3-Radiation-4B.F32.gguf | F32 | 16.1 GB | | Carinae-Qwen3-Radiation-4B.Q2_K.gguf | Q2_K | 1.67 GB | | Carinae-Qwen3-Radiation-4B.Q3_K_L.gguf | Q3_K_L | 2.24 GB | | Carinae-Qwen3-Radiation-4B.Q3_K_M.gguf | Q3_K_M | 2.08 GB | | Carinae-Qwen3-Radiation-4B.Q3_K_S.gguf | Q3_K_S | 1.89 GB | | Carinae-Qwen3-Radiation-4B.Q4_K_M.gguf | Q4_K_M | 2.5 GB | | Carinae-Qwen3-Radiation-4B.Q4_K_S.gguf | Q4_K_S | 2.38 GB | | Carinae-Qwen3-Radiation-4B.Q5_K_M.gguf | Q5_K_M | 2.89 GB | | Carinae-Qwen3-Radiation-4B.Q5_K_S.gguf | Q5_K_S | 2.82 GB | | Carinae-Qwen3-Radiation-4B.Q6_K.gguf | Q6_K | 3.31 GB | | Carinae-Qwen3-Radiation-4B.Q8_0.gguf | Q8_0 | 4.28 GB | ## Quants Usage (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
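The card lists quant files but no inference example; here is a minimal local-inference sketch, not part of the original card. It assumes the `llama-cpp-python` package and one downloaded quant from the table above (Q4_K_M, chosen arbitrarily):

```python
from llama_cpp import Llama

# Point model_path at whichever quant file you downloaded from this repo
llm = Llama(model_path="Carinae-Qwen3-Radiation-4B.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-line definition of conditional probability."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants (Q2_K through Q4_K_S) trade quality for memory; the BF16/F16/F32 files are essentially unquantized and need far more RAM.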
OsamaKoll/blockassist
OsamaKoll
2025-09-23T06:41:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "slender unseen sardine", "arxiv:2504.07091", "region:us" ]
null
2025-09-20T08:27:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - slender unseen sardine --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
khushi155/wine-rf-model
khushi155
2025-09-23T06:39:34Z
0
0
null
[ "region:us" ]
null
2025-09-23T06:34:44Z
# Wine RandomForest model Files: - wine_rf_model.pkl - feature_names.json Load with joblib.load().
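A short usage sketch implied by the card (the filenames come from the list above; the feature values below are placeholders, since the card does not document the dataset):

```python
import json
import joblib
import numpy as np

# Load the fitted RandomForest and its expected feature order
model = joblib.load("wine_rf_model.pkl")
with open("feature_names.json") as f:
    feature_names = json.load(f)

# One placeholder sample, columns ordered as in feature_names
X = np.zeros((1, len(feature_names)))
print(model.predict(X))
```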
bieriszc/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_fanged_octopus
bieriszc
2025-09-23T06:36:34Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am majestic_fanged_octopus", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-14T05:55:05Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am majestic_fanged_octopus --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_split_0_2048_0.5
ChenWu98
2025-09-23T06:35:08Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-3B", "base_model:finetune:Qwen/Qwen2.5-3B", "endpoints_compatible", "region:us" ]
null
2025-09-23T06:30:00Z
--- base_model: Qwen/Qwen2.5-3B library_name: transformers model_name: numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_split_0_2048_0.5 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_split_0_2048_0.5 This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_split_0_2048_0.5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/5vafp8jb) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ChenWu98/numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_split_1
ChenWu98
2025-09-23T06:33:45Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "endpoints_compatible", "region:us" ]
null
2025-09-23T06:33:17Z
--- base_model: Qwen/Qwen2.5-0.5B library_name: transformers model_name: numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_split_1 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_split_1 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_split_1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/94r3kf9h) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758609105
poolkiltzn
2025-09-23T06:33:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-23T06:32:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vigilant alert tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
simon-mellergaard/business-news-generator-smollm2-initial
simon-mellergaard
2025-09-23T06:32:51Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-19T10:51:01Z
--- library_name: transformers license: apache-2.0 base_model: HuggingFaceTB/SmolLM2-135M tags: - generated_from_trainer model-index: - name: business-news-generator-smollm2-initial results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # business-news-generator-smollm2-initial This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.9295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: constant - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6526 | 0.32 | 200 | 3.9829 | | 3.4161 | 0.64 | 400 | 3.8918 | | 3.2717 | 0.96 | 600 | 3.8375 | | 2.3323 | 1.28 | 800 | 3.9827 | | 2.3346 | 1.6 | 1000 | 3.9801 | | 2.3785 | 1.92 | 1200 | 3.9295 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.2
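The card omits a usage snippet; a minimal generation sketch with the standard transformers pipeline is below, where the prompt and decoding settings are placeholders rather than recommendations.

```python
# Minimal generation sketch (pip install transformers torch).
# The repo id comes from this card; prompt and max_new_tokens are illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="simon-mellergaard/business-news-generator-smollm2-initial",
)
print(generator("Markets opened higher today as", max_new_tokens=80)[0]["generated_text"])
```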
prithivMLmods/Leporis-Qwen3-Radiation-1.7B-GGUF
prithivMLmods
2025-09-23T06:32:27Z
0
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "math", "polished", "Abliterated", "multilingual", "text-generation", "en", "zh", "base_model:prithivMLmods/Leporis-Qwen3-Radiation-1.7B", "base_model:quantized:prithivMLmods/Leporis-Qwen3-Radiation-1.7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-22T18:43:58Z
--- license: apache-2.0 language: - en - zh base_model: - prithivMLmods/Leporis-Qwen3-Radiation-1.7B pipeline_tag: text-generation library_name: transformers tags: - text-generation-inference - math - polished - Abliterated - multilingual --- # **Leporis-Qwen3-Radiation-1.7B-GGUF** > Leporis-Qwen3-Radiation-1.7B is a reasoning-focused model fine-tuned from Qwen3 for Abliterated Reasoning with polished token probabilities, improving balanced multilingual generation across mathematics and general-purpose reasoning. It specializes in event-driven logic, structured analysis, and precise probabilistic modeling, making it an ideal tool for researchers, educators, and developers working with uncertainty and structured reasoning. ## Model Files | File Name | Quant Type | File Size | | - | - | - | | Leporis-Qwen3-Radiation-1.7B.BF16.gguf | BF16 | 3.45 GB | | Leporis-Qwen3-Radiation-1.7B.F16.gguf | F16 | 3.45 GB | | Leporis-Qwen3-Radiation-1.7B.F32.gguf | F32 | 6.89 GB | | Leporis-Qwen3-Radiation-1.7B.Q2_K.gguf | Q2_K | 778 MB | | Leporis-Qwen3-Radiation-1.7B.Q3_K_L.gguf | Q3_K_L | 1 GB | | Leporis-Qwen3-Radiation-1.7B.Q3_K_M.gguf | Q3_K_M | 940 MB | | Leporis-Qwen3-Radiation-1.7B.Q3_K_S.gguf | Q3_K_S | 867 MB | | Leporis-Qwen3-Radiation-1.7B.Q4_0.gguf | Q4_0 | 1.05 GB | | Leporis-Qwen3-Radiation-1.7B.Q4_1.gguf | Q4_1 | 1.14 GB | | Leporis-Qwen3-Radiation-1.7B.Q4_K.gguf | Q4_K | 1.11 GB | | Leporis-Qwen3-Radiation-1.7B.Q4_K_M.gguf | Q4_K_M | 1.11 GB | | Leporis-Qwen3-Radiation-1.7B.Q4_K_S.gguf | Q4_K_S | 1.06 GB | | Leporis-Qwen3-Radiation-1.7B.Q5_0.gguf | Q5_0 | 1.23 GB | | Leporis-Qwen3-Radiation-1.7B.Q5_1.gguf | Q5_1 | 1.32 GB | | Leporis-Qwen3-Radiation-1.7B.Q5_K.gguf | Q5_K | 1.26 GB | | Leporis-Qwen3-Radiation-1.7B.Q5_K_M.gguf | Q5_K_M | 1.26 GB | | Leporis-Qwen3-Radiation-1.7B.Q5_K_S.gguf | Q5_K_S | 1.23 GB | | Leporis-Qwen3-Radiation-1.7B.Q6_K.gguf | Q6_K | 1.42 GB | | Leporis-Qwen3-Radiation-1.7B.Q8_0.gguf | Q8_0 | 1.83 GB | ## Quants Usage (sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
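As with the 4B card above, a short end-to-end sketch may help: it fetches one quant with huggingface_hub and runs it through llama-cpp-python, with the Q4_K_M choice and the prompt being illustrative assumptions.

```python
# Sketch: download one quant from this repo and run it locally
# (pip install huggingface_hub llama-cpp-python).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="prithivMLmods/Leporis-Qwen3-Radiation-1.7B-GGUF",
    filename="Leporis-Qwen3-Radiation-1.7B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Compute 17 * 24 and explain each step."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```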
swvwr53e/gpt-4o-mini
swvwr53e
2025-09-23T06:28:56Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-23T06:28:56Z
--- license: apache-2.0 ---
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_18_4_all_37_0.001_5120_3
winnieyangwannan
2025-09-23T06:28:35Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:27:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
16dvnk/AaI_mini.plus_alpha.plus_0729_Base
16dvnk
2025-09-23T06:28:32Z
0
1
transformers
[ "transformers", "Self", "text-generation", "en", "dataset:Navanjana/Gutenberg_books", "dataset:aisuko/simple_english_wikipedia", "dataset:stas/openwebtext-10k", "dataset:RaiBP/openwebtext2-first-30-chunks-lang-detect-raw-output", "dataset:lucadiliello/bookcorpusopen", "dataset:deepmind/pg19", "license:cc0-1.0", "endpoints_compatible", "region:us" ]
text-generation
2025-07-31T08:46:41Z
--- license: cc0-1.0 datasets: - Navanjana/Gutenberg_books - aisuko/simple_english_wikipedia - stas/openwebtext-10k - RaiBP/openwebtext2-first-30-chunks-lang-detect-raw-output - lucadiliello/bookcorpusopen - deepmind/pg19 language: - en pipeline_tag: text-generation library_name: transformers tags: - Self --- **AaI Introduction** AaI is a model built entirely from scratch by 16dvnk on his NVIDIA GeForce RTX 4080 Laptop GPU. He trained it for 11 hours straight and, after some tuning, released this model. He describes the process as laborious. He named it AaI rather than AAI or other variations, which he considers an “eyesore”. **Architecture** The model uses a generative pre-trained transformer architecture. **Technical Specifications** | AaI Specs | Details | |------------------------|----------------------------------------| | Creator | 16dvnk | | Hardware | NVIDIA GeForce RTX 4080 Laptop GPU | | Training Duration | 11 hours | | Framework | PyTorch | | Parameter Count | 14 million | | Model Type | Generative pre-trained transformer | | Initial Training Year | 2025 | | Stable Release Status | No stable release as of September 2025 | **Notes** • All current releases have 14M parameters, which is considered small. • The model was trained using PyTorch. • As of September 2025, there is no stable release of AaI.
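No loading instructions are given; the sketch below assumes the checkpoint loads through the standard transformers text-generation pipeline, which the card does not confirm for a from-scratch 14M-parameter model.

```python
# Speculative loading sketch (pip install transformers torch); the loading path
# is an assumption to verify, not documented behavior.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="16dvnk/AaI_mini.plus_alpha.plus_0729_Base",
    trust_remote_code=True,  # a from-scratch architecture may require custom code
)
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```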
ZYXue/2025_09_22_21_47_41_PDT
ZYXue
2025-09-23T06:28:21Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2025-09-23T04:50:20Z
--- base_model: google/medgemma-4b-it library_name: transformers model_name: 2025_09_22_21_47_41_PDT tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for 2025_09_22_21_47_41_PDT This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ZYXue/2025_09_22_21_47_41_PDT", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.8.0+cu126 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_24_4_all_37_0.001_5120_3
winnieyangwannan
2025-09-23T06:27:32Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:26:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_6_4_all_37_0.001_5120_3
winnieyangwannan
2025-09-23T06:27:22Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:26:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_30_4_all_37_0.001_5120_3
winnieyangwannan
2025-09-23T06:27:22Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:26:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_2_4_all_37_0.001_5120_3
winnieyangwannan
2025-09-23T06:27:22Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:26:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_20_4_all_37_0.001_5120_3
winnieyangwannan
2025-09-23T06:27:22Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:26:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_4_4_all_37_0.001_5120_3
winnieyangwannan
2025-09-23T06:27:20Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:26:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_8_4_all_37_0.001_5120_3
winnieyangwannan
2025-09-23T06:27:20Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:26:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
perfectblue/camilo
perfectblue
2025-09-23T06:27:13Z
0
0
null
[ "license:other", "region:us" ]
null
2025-09-23T05:45:14Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
langtech-innovation/Salamandra-7b_pre-1.3-160k_sft-2.0_openlicenses
langtech-innovation
2025-09-23T06:26:25Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-23T06:26:25Z
--- license: apache-2.0 ---
zjhhhh/qwen2.5_3B_Instruct_fixed_bn_beta_1_eta_1e4_step_312_final
zjhhhh
2025-09-23T06:23:56Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:23:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shapka187/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_lanky_gibbon
shapka187
2025-09-23T06:23:52Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am docile lanky gibbon", "trl", "genrl-swarm", "I am docile_lanky_gibbon", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-17T20:15:36Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_lanky_gibbon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am docile lanky gibbon - trl - genrl-swarm - I am docile_lanky_gibbon licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_lanky_gibbon This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shapka187/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_lanky_gibbon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MananSuri27/Qwen2.5-3B-Instruct-GRPO-NoMult-ARGUS-20250922_200358
MananSuri27
2025-09-23T06:22:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:21:39Z
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** MananSuri27
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-3B-Instruct

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
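The card above ships without usage code; the following is a minimal, hedged loading sketch with 🤗 Transformers, mirroring the quickstart pattern used by other cards in this collection. It assumes the repository contains merged full weights for a standard Qwen2.5 causal LM; if only LoRA adapters were uploaded, load the base model and attach the adapters with PEFT instead. The prompt is illustrative.

```python
from transformers import pipeline

# Assumption: the repo holds merged full weights for a standard
# Qwen2.5 causal LM (not LoRA-only adapters).
generator = pipeline(
    "text-generation",
    model="MananSuri27/Qwen2.5-3B-Instruct-GRPO-NoMult-ARGUS-20250922_200358",
    device_map="auto",
)
output = generator(
    [{"role": "user", "content": "Summarize this model's purpose in one sentence."}],
    max_new_tokens=64,
    return_full_text=False,
)[0]
print(output["generated_text"])
```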
husjfry/blockassist-bc-climbing_pouncing_dragonfly_1758608322
husjfry
2025-09-23T06:21:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "climbing pouncing dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-23T06:19:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - climbing pouncing dragonfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jimmyluzan/uuu_fine_tune_gpt2
jimmyluzan
2025-09-23T06:20:29Z
0
0
null
[ "safetensors", "gpt2", "license:apache-2.0", "region:us" ]
null
2025-09-23T05:49:51Z
--- license: apache-2.0 ---
HenryHYH/wine_v10_other_model
HenryHYH
2025-09-23T06:19:42Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-23T06:19:27Z
---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** HenryHYH
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-1.7b-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Lien-an/uuu_fine_tune_gpt2
Lien-an
2025-09-23T06:19:16Z
0
0
null
[ "safetensors", "gpt2", "license:apache-2.0", "region:us" ]
null
2025-09-23T05:21:40Z
--- license: apache-2.0 ---
JheiKrauzer/blockassist
JheiKrauzer
2025-09-23T06:17:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged nimble bear", "arxiv:2504.07091", "region:us" ]
null
2025-09-19T10:32:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - winged nimble bear --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64-0922195511-epoch-4
vectorzhou
2025-09-23T06:17:11Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T04:58:28Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64 tags: - generated_from_trainer - text-generation - fine-tuned - trl - extra-gradient licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64 This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64-0922195511-epoch-4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/citbyuml) This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.8.0+cu128 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite Extragradient as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
husjfry/blockassist-bc-climbing_pouncing_dragonfly_1758607964
husjfry
2025-09-23T06:15:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "climbing pouncing dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-23T06:13:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - climbing pouncing dragonfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
inclusionAI/Ring-flash-2.0
inclusionAI
2025-09-23T06:14:50Z
114
60
transformers
[ "transformers", "safetensors", "bailing_moe", "text-generation", "conversational", "custom_code", "base_model:inclusionAI/Ling-flash-base-2.0", "base_model:finetune:inclusionAI/Ling-flash-base-2.0", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2025-09-19T07:11:36Z
---
license: mit
base_model:
- inclusionAI/Ling-flash-base-2.0
pipeline_tag: text-generation
library_name: transformers
---

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>

<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp;&nbsp;| &nbsp;&nbsp;🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>

## Introduction

Today, we are officially open-sourcing Ring-flash-2.0. This is a __high-performance thinking model__, deeply optimized from Ling-flash-2.0-base. Like Ling-flash-2.0, Ring-flash-2.0 has a total of 100B parameters, with only 6.1B activated per inference. Our independently developed __IcePop algorithm__ has successfully addressed the challenge of training instability in reinforcement learning (RL) for MoE LLMs after cold-start Long-CoT SFT, enabling the model's complex reasoning capabilities to improve continuously throughout extended RL training cycles. Ring-flash-2.0 demonstrates significant breakthroughs across multiple challenging benchmarks, including __math competitions__, __code generation__, and __logical reasoning__. Its performance not only surpasses that of SOTA dense models under 40B parameters but also rivals larger open-weight MoE models and closed-source high-performance thinking-model APIs.

### Leading Performance in Complex Reasoning

We selected __representative open-source thinking models__ and __closed-source APIs__ for comparison, including GPT-OSS-120B(medium), Qwen3-32B-Thinking, Seed-OSS-36B-Instruct, and Gemini-2.5-Flash. The benchmarking results demonstrate that Ring-flash-2.0 exhibits leading performance across multiple challenging general reasoning tasks, including:

- __Math competitions__ (AIME 25, Omni-MATH),
- __Code generation__ (LiveCodeBench, CodeForce-Elo),
- __Logical reasoning__ (ARC-Prize).

It also shows strong competitiveness in specialized domains such as:

- __Scientific and medical reasoning__ (GPQA-Diamond, HealthBench).

More surprisingly, although Ring-flash-2.0 is primarily designed for complex reasoning, it outperforms all other compared models in __creative writing__ (Creative Writing v3) and matches the creative capability of its "twin brother", the non-thinking model Ling-flash-2.0.

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*jLbeS74JqB8AAAAAWmAAAAgAemJ7AQ/original"/>
</p>

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*_AG2T62ZWNsAAAAAWKAAAAgAemJ7AQ/original"/>
</p>

### Efficient Architecture, High-Speed Inference

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*awCaS4yTD9UAAAAAUdAAAAgAemJ7AQ/original"/>
</p>

Building on the highly efficient MoE architecture of the Ling 2.0 series, and through structural optimizations such as a __1/32 expert activation ratio__ and __MTP layers__, Ring-flash-2.0 activates only 6.1B (4.8B non-embedding) parameters while delivering performance comparable to a ∼40B dense model. Thanks to its low-activation, high-sparsity design, Ring-flash-2.0 achieves a high generation speed of __200+ tokens/sec__ when deployed on just four H20 GPUs, significantly reducing inference costs for thinking models in high-concurrency scenarios.

## IcePop: Cooling Down Training-Inference Gaps in RL for MoE Models

During RL training of MoE models, the precision discrepancy between the training and inference engines is more pronounced than for dense models.
This gap widens progressively as sequence length and training steps increase, particularly during long-sequence generation and extended training cycles. A more critical issue is that the original GRPO algorithm begins to break down within a limited number of training steps: the probability discrepancy for the same token between the training and inference phases gradually increases, and once this relative difference exceeds 5%, training effectively fails. This poses a significant challenge for long-horizon reinforcement learning over lengthy sequences.

To address this issue, we introduced a key solution: __distribution calibration via masked bidirectional truncation, which effectively narrows the gap between training and inference__.

- Bidirectional Truncation: We truncate not only tokens where the training probability is significantly higher than the inference probability, but also the reverse case, where the training probability is much lower.
- Masking: Tokens with excessively large discrepancies are excluded from gradient computation.

For a detailed introduction to the algorithm, please refer to our technical blog: https://ringtech.notion.site/icepop (a minimal illustrative sketch of this masking rule also appears after the Quickstart section below).

## SFT + RLVR + RLHF Multi-Stage Training

To comprehensively enhance the capabilities of Ring-flash-2.0, we designed a two-stage RL pipeline. First, lightweight Long-CoT SFT equips the Ling-flash-2.0-base model with diverse thinking patterns. This is followed by RL training with Verifiable Rewards (RLVR) to continually stimulate the model's reasoning potential. Finally, an RLHF phase is incorporated to improve the model's general abilities.

During RL training, we compared joint RLVR + RLHF training against the two-stage RL pipeline we ultimately adopted. Both approaches showed similar effectiveness in our experiments. However, because the RLVR and RLHF tasks differ in difficulty, with RLHF involving relatively shorter model rollouts, joint training resulted in more long-tail generations. From an engineering-efficiency perspective, we therefore adopted the two-stage RL approach.

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4Q_4SbSv73YAAAAAQ6AAAAgAemJ7AQ/original"/>
</p>

## Quickstart

### 🤗 Hugging Face Transformers

Here is a code snippet showing how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-flash-2.0"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

### 🤖 ModelScope

If you're in mainland China, we strongly recommend using our model from 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>.
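To make the masked bidirectional truncation described in the IcePop section above concrete, here is a minimal PyTorch sketch. It is our illustrative reading of the description, not the authors' implementation: the function names, the threshold values `low`/`high`, and the simple policy-gradient loss are all assumptions; the exact rule is given in the technical blog linked above.

```python
import torch

def icepop_mask(train_logprobs: torch.Tensor,
                infer_logprobs: torch.Tensor,
                low: float = 0.5,
                high: float = 2.0) -> torch.Tensor:
    # Per-token probability ratio between the training engine and the
    # inference engine: p_train / p_infer.
    ratio = torch.exp(train_logprobs - infer_logprobs)
    # Bidirectional truncation: flag tokens whose training probability is
    # much higher OR much lower than the inference probability.
    # (The thresholds here are illustrative placeholders.)
    return ((ratio >= low) & (ratio <= high)).float()

def masked_policy_loss(train_logprobs: torch.Tensor,
                       infer_logprobs: torch.Tensor,
                       advantages: torch.Tensor) -> torch.Tensor:
    # Masking: tokens with an excessively large train/infer discrepancy
    # are excluded from the gradient computation entirely.
    mask = icepop_mask(train_logprobs, infer_logprobs)
    per_token = -advantages * train_logprobs  # simple REINFORCE-style term
    return (per_token * mask).sum() / mask.sum().clamp(min=1.0)
```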
## Deployment

### vLLM

vLLM supports both offline batched inference and launching an OpenAI-compatible API service for online inference.

#### Environment Preparation

Since the Pull Request (PR) has not yet been submitted to the vLLM community, please prepare the environment by following the steps below:

```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .
```

#### Offline Inference:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-flash-2.0")

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)

llm = LLM(model="inclusionAI/Ring-flash-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
```

#### Online Inference:

```bash
vllm serve inclusionAI/Ring-flash-2.0 \
    --tensor-parallel-size 2 \
    --pipeline-parallel-size 1 \
    --use-v2-block-manager \
    --gpu-memory-utilization 0.90
```

Once the server is up, you can query it with any OpenAI-compatible client; a minimal Python sketch appears at the end of this section.

To handle long context in vLLM using YaRN, we need to follow these two steps:
1. Add a `rope_scaling` field to the model's `config.json` file, for example:
```json
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```
2. Use the additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.

For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).

### SGLang

#### Environment Preparation

We will submit our model to the official SGLang release later; for now, prepare the environment with the following steps:
```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```
You can use the Docker image as well:
```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```
Then apply the patch to the SGLang installation:
```shell
# the `patch` command is required; run `yum install -y patch` if needed
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```

#### Run Inference

SGLang now supports both BF16 and FP8 models; which one is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same commands below:

- Start server:
```shell
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --host 0.0.0.0 --port $PORT \
    --trust-remote-code \
    --attention-backend fa3
```
MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN` to the start command.

- Client:
```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```

More usage examples can be found [here](https://docs.sglang.ai/basic_usage/send_request.html)

### Finetuning

We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ring](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md).
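As a quick check of the OpenAI-compatible service started with `vllm serve` above, a request along the lines of the following sketch should work, using the `openai` Python client. This is not part of the original card: the port (8000 is vLLM's default), the placeholder API key, and the `max_tokens` value are assumptions to adjust for your deployment.

```python
from openai import OpenAI

# Assumption: vLLM is serving on its default port 8000 with no auth,
# so any non-empty API key is accepted.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="inclusionAI/Ring-flash-2.0",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```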
## License This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ring-V2/blob/master/LICENSE).
NotoriousH2/gemma-3-1b-pt-MED_0923
NotoriousH2
2025-09-23T06:13:26Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T06:12:52Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]