Dataset schema (one row per model; string and list columns show min/max lengths, numeric and timestamp columns show min/max values):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-28 00:41:47 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (523 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-28 00:41:47 |
| card | string (length) | 11 | 1.01M |
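The rows below can also be consumed programmatically. As a minimal sketch (illustrative only: `user/models-metadata` is a placeholder id, substitute the actual Hub repo for this dump), the `datasets` library can load and filter the table:

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual dataset for this metadata dump.
ds = load_dataset("user/models-metadata", split="train")

# Keep rows that declare a library and have at least one download.
popular = ds.filter(lambda row: row["library_name"] is not None and row["downloads"] > 0)

for row in popular.select(range(min(5, len(popular)))):
    print(row["modelId"], row["downloads"], row["pipeline_tag"])
```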
**Dejiat/blockassist-bc-savage_unseen_bobcat_1756292388**
Author: Dejiat · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T11:00:08Z · Last modified: 2025-08-27T11:00:11Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, savage unseen bobcat, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**indoempatnol/blockassist-bc-fishy_wary_swan_1756290790**
Author: indoempatnol · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:59:46Z · Last modified: 2025-08-27T10:59:50Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, fishy wary swan, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**laurarconcepcion121/blockassist-bc-squinting_dextrous_gorilla_1756290735**
Author: laurarconcepcion121 · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:59:45Z · Last modified: 2025-08-27T10:59:48Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, squinting dextrous gorilla, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squinting dextrous gorilla
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-nld-Latn**
Author: LumiOpen · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:59:09Z · Last modified: 2025-08-27T10:59:40Z
Tags: safetensors, xlm-roberta, nld, dataset:LumiOpen/hpltv2-llama33-edu-annotation, license:apache-2.0, region:us

Card:

---
language:
- nld
license: apache-2.0
datasets:
- LumiOpen/hpltv2-llama33-edu-annotation
---

# Llama-HPLT-edu-Dutch classifier

## Model summary

This is a classifier for judging the educational content of Dutch (nld-Latn) web pages. It was developed to filter educational content from [HPLT v2](https://hplt-project.org/datasets/v2.0) and was trained on 450k [annotations](https://huggingface.co/datasets/LumiOpen/hpltv2-llama33-edu-annotation) generated by [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct). The web pages were sampled randomly from the Dutch subset of the corpus.

### How to load in transformers

To load the Llama-HPLT-Edu classifier, use the following code:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-nld-Latn")
model = AutoModelForSequenceClassification.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-nld-Latn")

text = "I'm non-educational web page containing nothing useful"
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)
logits = outputs.logits.squeeze(-1).float().detach().numpy()
score = logits.item()
result = {
    "text": text,
    "score": score,
    "int_score": int(round(max(0, min(score, 5)))),
}
print(result)
# results from a model trained with Welsh annotations
# {'text': "I'm non-educational web page containing nothing useful", 'score': 0.8145455718040466, 'int_score': 1}
# {'text': 'what are most common animals found in farm? there are cows, sheeps', 'score': 1.6858888864517212, 'int_score': 2}
```

## Training

- Model: FacebookAI/xlm-roberta-large with a classification head
- Dataset: 500,000 samples from Llama-3.3 annotations split into 450,000 train, 25,000 validation, and 25,000 test splits.
- Epochs: 20
- Learning Rate: 3e-4
- Evaluation Metric: F1 score

### Test Metrics

```
              precision    recall  f1-score   support

           0       0.85      0.73      0.79     11584
           1       0.61      0.73      0.66      9127
           2       0.45      0.53      0.49      2820
           3       0.36      0.26      0.30       967
           4       0.65      0.13      0.22       492
           5       0.25      0.10      0.14        10

    accuracy                           0.68     25000
   macro avg       0.53      0.41      0.43     25000
weighted avg       0.69      0.68      0.68     25000
```

## Citing

Preprint coming soon. If you need to cite this work, please use the citation below:

```
@misc{llama_hplt_edu_classifiers_2025,
  author = {Tarkka, Otto and Reunamo, Akseli and Vitiugin, Fedor and Pyysalo, Sampo},
  title = {Llama-HPLT-edu classifiers},
  year = 2025,
  url = {https://huggingface.co/collections/LumiOpen/hplt-edu-classifiers-68a85a78f9710426320e7cbb},
  publisher = {Hugging Face}
}
```
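The snippet in the card scores one page at a time; a minimal batched variant (illustrative, not from the card; it relies on the same single-regression-head checkpoint) would be:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-nld-Latn"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

texts = [
    "I'm non-educational web page containing nothing useful",
    "Photosynthesis converts light energy into chemical energy in plants.",
]
inputs = tokenizer(texts, return_tensors="pt", padding="longest", truncation=True)
with torch.no_grad():
    # Single regression head -> one educational-quality score per input.
    scores = model(**inputs).logits.squeeze(-1).float().cpu().tolist()

for text, score in zip(texts, scores):
    print({"text": text, "score": score, "int_score": int(round(max(0, min(score, 5))))})
```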
**LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-mya-Mymr**
Author: LumiOpen · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:58:15Z · Last modified: 2025-08-27T10:58:55Z
Tags: safetensors, xlm-roberta, mya, dataset:LumiOpen/hpltv2-llama33-edu-annotation, license:apache-2.0, region:us

Card:

---
language:
- mya
license: apache-2.0
datasets:
- LumiOpen/hpltv2-llama33-edu-annotation
---

# Llama-HPLT-edu-Burmese classifier

## Model summary

This is a classifier for judging the educational content of Burmese (mya-Mymr) web pages. It was developed to filter educational content from [HPLT v2](https://hplt-project.org/datasets/v2.0) and was trained on 450k [annotations](https://huggingface.co/datasets/LumiOpen/hpltv2-llama33-edu-annotation) generated by [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct). The web pages were sampled randomly from the Burmese subset of the corpus.

### How to load in transformers

To load the Llama-HPLT-Edu classifier, use the following code:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-mya-Mymr")
model = AutoModelForSequenceClassification.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-mya-Mymr")

text = "I'm non-educational web page containing nothing useful"
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)
logits = outputs.logits.squeeze(-1).float().detach().numpy()
score = logits.item()
result = {
    "text": text,
    "score": score,
    "int_score": int(round(max(0, min(score, 5)))),
}
print(result)
# results from a model trained with Welsh annotations
# {'text': "I'm non-educational web page containing nothing useful", 'score': 0.8145455718040466, 'int_score': 1}
# {'text': 'what are most common animals found in farm? there are cows, sheeps', 'score': 1.6858888864517212, 'int_score': 2}
```

## Training

- Model: FacebookAI/xlm-roberta-large with a classification head
- Dataset: 500,000 samples from Llama-3.3 annotations split into 450,000 train, 25,000 validation, and 25,000 test splits.
- Epochs: 20
- Learning Rate: 3e-4
- Evaluation Metric: F1 score

### Test Metrics

```
              precision    recall  f1-score   support

           0       0.77      0.47      0.59      8522
           1       0.57      0.73      0.64     10161
           2       0.43      0.56      0.49      3721
           3       0.39      0.40      0.39      1593
           4       0.68      0.24      0.35       967
           5       0.17      0.11      0.14        36

    accuracy                           0.58     25000
   macro avg       0.50      0.42      0.43     25000
weighted avg       0.61      0.58      0.57     25000
```

## Citing

Preprint coming soon. If you need to cite this work, please use the citation below:

```
@misc{llama_hplt_edu_classifiers_2025,
  author = {Tarkka, Otto and Reunamo, Akseli and Vitiugin, Fedor and Pyysalo, Sampo},
  title = {Llama-HPLT-edu classifiers},
  year = 2025,
  url = {https://huggingface.co/collections/LumiOpen/hplt-edu-classifiers-68a85a78f9710426320e7cbb},
  publisher = {Hugging Face}
}
```
**katherine155/blockassist-bc-fluffy_fleecy_rooster_1756290706**
Author: katherine155 · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:58:20Z · Last modified: 2025-08-27T10:58:23Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, fluffy fleecy rooster, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fluffy fleecy rooster
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**Dejiat/blockassist-bc-savage_unseen_bobcat_1756292222**
Author: Dejiat · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:57:24Z · Last modified: 2025-08-27T10:57:26Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, savage unseen bobcat, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**mingyi456/shuttle-jaguar-DF11**
Author: mingyi456 · Downloads: 9 · Likes: 0 · Library: diffusers · Pipeline: text-to-image
Created: 2025-08-26T10:36:48Z · Last modified: 2025-08-27T10:57:15Z
Tags: diffusers, safetensors, text-to-image, en, base_model:shuttleai/shuttle-jaguar, base_model:quantized:shuttleai/shuttle-jaguar, license:apache-2.0, region:us

Card:

---
license: apache-2.0
base_model:
- shuttleai/shuttle-jaguar
base_model_relation: quantized
pipeline_tag: text-to-image
language:
- en
tags:
- diffusers
---

To my knowledge, this is the first community-uploaded DFloat11-compressed model on Hugging Face. For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11

Feel free to request other models for compression as well, although I currently only know how to compress models based on the Flux architecture.

### How to Use

#### `diffusers`

1. Install the DFloat11 pip package *(installs the CUDA kernel automatically; requires a CUDA-compatible GPU and PyTorch installed)*:

```bash
pip install dfloat11[cuda12]
# or if you have CUDA version 11:
# pip install dfloat11[cuda11]
```

2. To use the DFloat11 model, run the following example code in Python:

```python
import torch
from diffusers import FluxPipeline
from dfloat11 import DFloat11Model

pipe = FluxPipeline.from_pretrained("shuttleai/shuttle-jaguar", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

DFloat11Model.from_pretrained('mingyi456/shuttle-jaguar-DF11', device='cpu', bfloat16_model=pipe.transformer)

prompt = "A futuristic cityscape at sunset, with flying cars, neon lights, and reflective water canals"
image = pipe(
    prompt,
    guidance_scale=0.0,
    num_inference_steps=4,
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("shuttle-jaguar.png")
```

#### ComfyUI

Follow the instructions here (I have not tested this myself): https://github.com/LeanModels/ComfyUI-DFloat11
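For a sense of the footprint, DFloat11 losslessly re-encodes BF16 weights at roughly 11 effective bits each. A back-of-envelope estimate (illustrative only; the ~12B parameter count for a FLUX-class transformer is an assumption, not a figure from this card):

```python
# Rough sizing: BF16 is 16 bits/weight, DFloat11 ~11 bits/weight (lossless).
params = 12e9  # assumed FLUX-class transformer size
bf16_gb = params * 16 / 8 / 1e9
df11_gb = params * 11 / 8 / 1e9
print(f"BF16: ~{bf16_gb:.0f} GB -> DF11: ~{df11_gb:.1f} GB ({df11_gb / bf16_gb:.0%} of original)")
```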
**mradermacher/Llama3.1-CrimeSolver-8B-GGUF**
Author: mradermacher · Downloads: 0 · Likes: 0 · Library: transformers · Pipeline: null
Created: 2025-08-27T09:23:04Z · Last modified: 2025-08-27T10:56:28Z
Tags: transformers, gguf, merge, mergekit, lazymergekit, darkc0de/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Uncensored-Toxic-DPO, stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated, en, base_model:Yuma42/Llama3.1-CrimeSolver-8B, base_model:quantized:Yuma42/Llama3.1-CrimeSolver-8B, endpoints_compatible, region:us, conversational

Card:

---
base_model: Yuma42/Llama3.1-CrimeSolver-8B
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- darkc0de/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Uncensored-Toxic-DPO
- stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

static quants of https://huggingface.co/Yuma42/Llama3.1-CrimeSolver-8B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama3.1-CrimeSolver-8B-GGUF).***

Weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-CrimeSolver-8B-GGUF/resolve/main/Llama3.1-CrimeSolver-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
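As a concrete starting point (an illustrative sketch, not from the card; it assumes `llama-cpp-python` and `huggingface_hub` are installed, and picks the Q4_K_M file flagged "fast, recommended" above):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from this repo; the filename matches the table above.
path = hf_hub_download(
    repo_id="mradermacher/Llama3.1-CrimeSolver-8B-GGUF",
    filename="Llama3.1-CrimeSolver-8B.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Explain what a model merge is in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```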
**LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-mal-Mlym**
Author: LumiOpen · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:54:59Z · Last modified: 2025-08-27T10:55:56Z
Tags: safetensors, xlm-roberta, mal, dataset:LumiOpen/hpltv2-llama33-edu-annotation, license:apache-2.0, region:us

Card:

---
language:
- mal
license: apache-2.0
datasets:
- LumiOpen/hpltv2-llama33-edu-annotation
---

# Llama-HPLT-edu-Malayalam classifier

## Model summary

This is a classifier for judging the educational content of Malayalam (mal-Mlym) web pages. It was developed to filter educational content from [HPLT v2](https://hplt-project.org/datasets/v2.0) and was trained on 450k [annotations](https://huggingface.co/datasets/LumiOpen/hpltv2-llama33-edu-annotation) generated by [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct). The web pages were sampled randomly from the Malayalam subset of the corpus.

### How to load in transformers

To load the Llama-HPLT-Edu classifier, use the following code:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-mal-Mlym")
model = AutoModelForSequenceClassification.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-mal-Mlym")

text = "I'm non-educational web page containing nothing useful"
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)
logits = outputs.logits.squeeze(-1).float().detach().numpy()
score = logits.item()
result = {
    "text": text,
    "score": score,
    "int_score": int(round(max(0, min(score, 5)))),
}
print(result)
# results from a model trained with Welsh annotations
# {'text': "I'm non-educational web page containing nothing useful", 'score': 0.8145455718040466, 'int_score': 1}
# {'text': 'what are most common animals found in farm? there are cows, sheeps', 'score': 1.6858888864517212, 'int_score': 2}
```

## Training

- Model: FacebookAI/xlm-roberta-large with a classification head
- Dataset: 500,000 samples from Llama-3.3 annotations split into 450,000 train, 25,000 validation, and 25,000 test splits.
- Epochs: 20
- Learning Rate: 3e-4
- Evaluation Metric: F1 score

### Test Metrics

```
              precision    recall  f1-score   support

           0       0.76      0.59      0.67      7294
           1       0.69      0.74      0.71     11742
           2       0.44      0.61      0.51      3466
           3       0.40      0.38      0.39      1524
           4       0.72      0.29      0.42       929
           5       0.19      0.16      0.17        45

    accuracy                           0.64     25000
   macro avg       0.53      0.46      0.48     25000
weighted avg       0.66      0.64      0.64     25000
```

## Citing

Preprint coming soon. If you need to cite this work, please use the citation below:

```
@misc{llama_hplt_edu_classifiers_2025,
  author = {Tarkka, Otto and Reunamo, Akseli and Vitiugin, Fedor and Pyysalo, Sampo},
  title = {Llama-HPLT-edu classifiers},
  year = 2025,
  url = {https://huggingface.co/collections/LumiOpen/hplt-edu-classifiers-68a85a78f9710426320e7cbb},
  publisher = {Hugging Face}
}
```
**bah63843/blockassist-bc-plump_fast_antelope_1756291935**
Author: bah63843 · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:52:54Z · Last modified: 2025-08-27T10:53:04Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, plump fast antelope, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**mradermacher/One-Shot-CFT-Math-Llama-3B-i1-GGUF**
Author: mradermacher · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:50:32Z · Last modified: 2025-08-27T10:50:41Z
Tags: gguf, region:us

Card:

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/TIGER-Lab/One-Shot-CFT-Math-Llama-3B
**wsprnoorx/blockassist-bc-prowling_silent_hyena_1756291701**
Author: wsprnoorx · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:49:51Z · Last modified: 2025-08-27T10:50:10Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, prowling silent hyena, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prowling silent hyena
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**Satram/MANUAL_164_Packing**
Author: Satram · Downloads: 0 · Likes: 0 · Library: transformers · Pipeline: null
Created: 2025-08-27T10:49:46Z · Last modified: 2025-08-27T10:50:03Z
Tags: transformers, safetensors, text-generation-inference, unsloth, llama, trl, en, license:apache-2.0, endpoints_compatible, region:us

Card:

---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Satram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**lautan/blockassist-bc-gentle_patterned_goat_1756290087**
Author: lautan · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:48:15Z · Last modified: 2025-08-27T10:48:19Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, gentle patterned goat, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**Dejiat/blockassist-bc-savage_unseen_bobcat_1756291647**
Author: Dejiat · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:47:50Z · Last modified: 2025-08-27T10:47:52Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, savage unseen bobcat, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**thanobidex/blockassist-bc-colorful_shiny_hare_1756290030**
Author: thanobidex · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:45:53Z · Last modified: 2025-08-27T10:45:57Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, colorful shiny hare, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**jonlizardo/affine-gpt-oss-120b-light**
Author: jonlizardo · Downloads: 0 · Likes: 0 · Library: Model Optimizer · Pipeline: text-generation
Created: 2025-08-27T10:30:44Z · Last modified: 2025-08-27T10:45:49Z
Tags: Model Optimizer, safetensors, llama, nvidia, ModelOpt, gpt-oss-120b, quantized, Eagle3, text-generation, base_model:openai/gpt-oss-120b, base_model:finetune:openai/gpt-oss-120b, license:other, region:us

Card:

---
pipeline_tag: text-generation
base_model:
- openai/gpt-oss-120b
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
library_name: Model Optimizer
tags:
- nvidia
- ModelOpt
- gpt-oss-120b
- quantized
- Eagle3
---

# Model Overview

## Description:

The NVIDIA gpt-oss-120b Eagle model is the Eagle head of OpenAI's gpt-oss-120b model, an auto-regressive language model that uses a mixture-of-experts (MoE) architecture with roughly 5.1 billion activated parameters and 117 billion total parameters. For more information, please check [here](https://huggingface.co/openai/gpt-oss-120b). The NVIDIA gpt-oss-120b Eagle3 model incorporates Eagle speculative decoding with [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer).

This model is ready for commercial/non-commercial use. <br>

### License/Terms of Use:

[nvidia-open-model-license](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/)

### Deployment Geography:

Global <br>

### Use Case: <br>

Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks. <br>

### Release Date: <br>

Huggingface: Aug 20th, 2025 via [https://huggingface.co/nvidia/gpt-oss-120b-Eagle3] <br>

## Model Architecture:

**Architecture Type:** Transformers <br>
**Network Architecture:** gpt-oss-120b <br>

## Computational Load

- Cumulative Compute: 4.8x10^20
- Estimated Energy and Emissions for Model Training:
  - Total kWh = 2500
  - Total Emissions (tCO2e) = 0.8075

## Input:

**Input Type(s):** Text <br>
**Input Format(s):** String <br>
**Input Parameters:** One-Dimensional (1D): Sequences <br>

## Output:

**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Parameters:** One-Dimensional (1D): Sequences <br>

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>

## Software Integration:

**Supported Runtime Engine(s):** <br>
* TensorRT-LLM <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Blackwell <br>

**Preferred Operating System(s):** <br>
* Linux <br>

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

## Model Version(s):

The model is quantized with nvidia-modelopt **v0.35.0** <br>

## Training and Evaluation Datasets:

- Total size (in number of data points): 503.3K <br>
- Total number of datasets: 2 <br>
- Dataset partition: Training 100% <br>

## Training Dataset:

**Link:** [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) and [Magpie-Llama-3.1-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Llama-3.1-Pro-300K-Filtered). Only prompts from these datasets were used for data synthesis (the original responses from GPT were not used); the synthesized data was then used to train the Eagle modules. Click the links above for more information regarding the datasets.

**Data Modality:** [Text] <br>
**Data Collection Method by dataset:** <br>
* Hybrid: Synthetic, Human, Automated <br>
**Labeling Method by dataset:** <br>
* Hybrid: Synthetic, Human, Automated <br>
**Properties:** 500K samples, majority synthetic, others sourced from commercially-friendly datasets. <br>

## Evaluation Dataset: <br>

**Link:** MTBench; for more details, see [here](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) <br>
**Data Collection Method by dataset:** <br>
* Hybrid: Human, Synthetic <br>
**Labeling Method by dataset:** <br>
* Hybrid: Human, Synthetic <br>
**Properties:** 3,300 multi-turn dialogue sequences, each annotated with expert preference votes. <br>

## Inference:

**Engine:** TensorRT-LLM <br>
**Test Hardware:** B200 <br>

## Eagle Speculative Decoding

Synthesized data was obtained from OpenAI's gpt-oss-120b model and then used to finetune the Eagle modules. This model is ready for inference with TensorRT-LLM in Eagle speculative decoding mode. Eagle modules are used to predict candidate tokens beyond the next token. In the generation step, each forward Eagle module generates a distribution of tokens beyond the previous one. Then, a tree-based attention mechanism samples candidate sequences for the original model to validate. The longest accepted candidate sequence is selected, so more than one token can be returned per generation step. The average number of tokens accepted per step is called the acceptance rate.

## Usage

To serve the quantized checkpoint with [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM), follow the sample commands below with the TensorRT-LLM GitHub repo:

```sh
trtllm-serve <gpt-oss-120b checkpoint> --host 0.0.0.0 --port 8000 --backend pytorch --max_batch_size 32 --max_num_tokens 8192 --max_seq_len 8192 --tp_size 8 --extra_llm_api_options extra-llm-api-config.yml
```

`extra-llm-api-config.yml` looks like this:

```yaml
enable_attention_dp: false
pytorch_backend_config:
  enable_overlap_scheduler: false
  use_cuda_graph: true
  cuda_graph_max_batch_size: 1
  autotuner_enabled: false
speculative_config:
  decoding_type: Eagle
  max_draft_len: 3
  pytorch_eagle_weights_path: <eagle3 checkpoint>
kv_cache_config:
  enable_block_reuse: false
```

## Evaluation

The Eagle acceptance rate benchmark results (MT-Bench) with draft length 3 are presented in the table below for medium reasoning:

| Category | MT Bench Acceptance Rate |
|:-----------|:------------------------:|
| writing | 2.11 |
| roleplay | 2.00 |
| reasoning | 2.35 |
| math | 2.73 |
| coding | 2.46 |
| extraction | 2.50 |
| stem | 2.09 |
| humanities | 1.92 |

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

SUBCARDS:

# **Explainability**

|Field:|Response:|
|:---:|:---:|
|Intended Application(s) & Domain(s):|Text generation, reasoning, summarization, and question answering.|
|Model Type:|Text-to-text transformer|
|Intended Users:|This model is intended for developers, researchers, and customers building/utilizing LLMs, while balancing accuracy and efficiency.|
|Output:|Text String(s)|
|Describe how the model works:|Generates text by predicting the next word or token based on the context provided in the input sequence using multiple self-attention layers|
|Technical Limitations:|The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. Before deploying any applications of this model, developers should perform safety testing and tuning tailored to their specific applications of the model.|
|Verified to have met prescribed quality standards?|Yes|
|Performance Metrics:|Accuracy, Throughput, and user-side throughput|
|Potential Known Risk|The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive.|
|Licensing:|Your usage is governed by the following [license](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/)|

# **Bias**

|Field:|Response:|
|:---:|:---:|
|Participation considerations from adversely impacted groups (protected classes) in model design and testing:|None|
|Measures taken to mitigate against unwanted bias:|None|

# **Safety & Security**

|Field:|Response:|
|:---:|:---:|
|Model Application(s):|Chat, Instruction Following, Chatbot Development, Code Generation, Reasoning|
|Describe life critical application (if present):|None Known|
|Use Case Restrictions:|Abide by the [license](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/)|
|Model and Dataset Restrictions:|The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation. Restrictions enforce dataset access during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs.|

# **Privacy**

|Field:|Response:|
|:---:|:---:|
|Generatable or reverse-engineerable personal data?|None|
|Was consent obtained for any personal data used?|None Known|
|Personal data used to create this model?|None Known|
|How often is dataset reviewed?|Before Release|
|Is there provenance for all datasets used in training?|Yes|
|Does data labeling (annotation, metadata) comply with privacy laws?|Yes|
|Applicable NVIDIA Privacy Policy|https://www.nvidia.com/en-us/about-nvidia/privacy-policy/|
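To make the acceptance-rate table concrete, a small illustrative calculation (not from the card): each target-model forward pass normally yields one token, so an acceptance rate of r tokens per step implies an ideal roughly r-times decoding speedup, before draft-head and verification overhead:

```python
# MT-Bench acceptance rates from the table above (draft length 3).
acceptance = {
    "writing": 2.11, "roleplay": 2.00, "reasoning": 2.35, "math": 2.73,
    "coding": 2.46, "extraction": 2.50, "stem": 2.09, "humanities": 1.92,
}

# Ideal speedup equals tokens accepted per target-model pass; real numbers
# are lower once Eagle-head and tree-verification costs are included.
for category, rate in sorted(acceptance.items(), key=lambda kv: -kv[1]):
    print(f"{category:>10}: ~{rate:.2f}x over plain autoregressive decoding")
```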
**Dejiat/blockassist-bc-savage_unseen_bobcat_1756291487**
Author: Dejiat · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:45:07Z · Last modified: 2025-08-27T10:45:09Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, savage unseen bobcat, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**GroomerG/blockassist-bc-vicious_pawing_badger_1756290179**
Author: GroomerG · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:44:46Z · Last modified: 2025-08-27T10:44:52Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, vicious pawing badger, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**chinesemusk/t5-en-de-translator**
Author: chinesemusk · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:01:55Z · Last modified: 2025-08-27T10:44:50Z
Tags: safetensors, t5, region:us

Card:

# T5 English ↔ German Translator

This repository hosts a fine-tuned **T5 model** for **English ↔ German translation**. The model, training notebook, and interactive demo are maintained by [@chinesemusk](https://huggingface.co/chinesemusk).

## Model Information

- **Architecture**: T5-small (Text-to-Text Transfer Transformer)
- **Task**: English ↔ German Translation (seq2seq)
- **Tokenizer**: SentencePiece (`spiece.model` + `tokenizer.json`)
- **Training Code**: Available in this [Google Colab / GitHub notebook](https://github.com/Deon62/Eng-German-Translator-model/blob/main/translator.ipynb)
- **Demo**: Interactive UI hosted via Gradio in my Hugging Face Space: [Kahnwald Translator Demo](https://huggingface.co/spaces/chinesemusk/Kahnwald)

## Use the Model

Load and run translations with just a few lines:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chinesemusk/t5-en-de-translator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "This is an example."
inputs = tokenizer(f"translate English to German: {text}", return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=60)

print("EN:", text)
print("DE:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Try It Live

Don't want to code? Try the model directly in your browser via this Gradio app: [**Live Translator Demo**](https://huggingface.co/spaces/chinesemusk/Kahnwald)

Enter text, select the direction (English → German or German → English), and get translations instantly.

## Purpose & Limitations

- **Purpose**: Educational and prototyping usage: learn how translation fine-tuning works and test small-scale translation tasks.
- **Limitations**:
  - Fine-tuned on a small dataset slice, so quality may vary on long or complex sentences.
  - Not designed for production-level accuracy or large-scale deployment.
  - The German → English direction works but may produce less accurate results, since the model was only lightly fine-tuned for that direction.

## Acknowledgments

- Model built using Hugging Face `transformers`, `datasets`, and `evaluate` libraries.
- Huge thanks to the original T5 authors (Google Research).
- Demo powered by **Gradio** in a Hugging Face Space.

## References

- Training Notebook: [translator.ipynb on GitHub](https://github.com/Deon62/Eng-German-Translator-model/blob/main/translator.ipynb)
- Gradio Demo Space: [Kahnwald](https://huggingface.co/spaces/chinesemusk/Kahnwald)
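The card only shows the `translate English to German:` prefix; if the fine-tune follows the standard T5 task-prefix convention for the reverse direction (an assumption, not confirmed by the card), German → English would look like:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chinesemusk/t5-en-de-translator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed prefix (standard T5 convention); the card only documents EN -> DE.
text = "Das ist ein Beispiel."
inputs = tokenizer(f"translate German to English: {text}", return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=60)

print("DE:", text)
print("EN:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```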
**xinnn32/blockassist-bc-meek_winged_caterpillar_1756291413**
Author: xinnn32 · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:44:01Z · Last modified: 2025-08-27T10:44:12Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, meek winged caterpillar, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**Kwoya/Mini-Spyra-v.3.6**
Author: Kwoya · Downloads: 3 · Likes: 0 · Library: null · Pipeline: text-generation
Created: 2025-08-25T11:37:43Z · Last modified: 2025-08-27T10:44:02Z
Tags: safetensors, llama, Architektur, BIM, Rhino, Grasshopper, text-generation, conversational, en, de, base_model:dphn/Dolphin3.0-Llama3.1-8B, base_model:finetune:dphn/Dolphin3.0-Llama3.1-8B, license:apache-2.0, region:us

Card:

---
license: apache-2.0
language:
- en
- de
base_model:
- cognitivecomputations/Dolphin3.0-Llama3.1-8B
pipeline_tag: text-generation
tags:
- Architektur
- BIM
- Rhino
- Grasshopper
---

# Mini-Spyra-v.3.6

## Model description

Mini-Spyra is an AI assistant specializing in providing information, answering questions, and assisting users with tasks related to building information modeling (BIM) using the Industry Foundation Classes (IFC).

Mini-Spyra is uncensored. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests,
**Dejiat/blockassist-bc-savage_unseen_bobcat_1756291327**
Author: Dejiat · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:42:31Z · Last modified: 2025-08-27T10:42:34Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, savage unseen bobcat, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**canoplos112/blockassist-bc-yapping_sleek_squirrel_1756290949**
Author: canoplos112 · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:36:25Z · Last modified: 2025-08-27T10:37:56Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, yapping sleek squirrel, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**xinnn32/blockassist-bc-meek_winged_caterpillar_1756291026**
Author: xinnn32 · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:37:35Z · Last modified: 2025-08-27T10:37:42Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, meek winged caterpillar, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**danieltcowleyh1/blockassist-bc-peaceful_darting_newt_1756289113**
Author: danieltcowleyh1 · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:35:58Z · Last modified: 2025-08-27T10:36:01Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, peaceful darting newt, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful darting newt
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**haydarkadioglu/bart-base-cnn-finetuned**
Author: haydarkadioglu · Downloads: 0 · Likes: 0 · Library: transformers · Pipeline: summarization
Created: 2025-08-26T13:01:35Z · Last modified: 2025-08-27T10:33:23Z
Tags: transformers, safetensors, bart, text2text-generation, code, summarization, en, dataset:abisee/cnn_dailymail, base_model:facebook/bart-base, base_model:finetune:facebook/bart-base, endpoints_compatible, region:us

Card:

---
library_name: transformers
tags:
- code
datasets:
- abisee/cnn_dailymail
language:
- en
base_model:
- facebook/bart-base
pipeline_tag: summarization
---

# Fine-Tuned BART (CNN/DailyMail)

This project contains a **fine-tuned version of BART-base** for **abstractive text summarization** using the [CNN/DailyMail dataset](https://huggingface.co/datasets/abisee/cnn_dailymail).

### Run with Google Colab

* [Open in Colab](https://colab.research.google.com/drive/12szSrgKAueJz6w-OD9QTVRt-ym2yKv9T?usp=sharing)
* Run the cells step by step to fine-tune or test the model.
* Install dependencies:

```bash
pip install transformers datasets torch
```

## 📌 Contents

- `bart_base_fine_tuned_cnn.ipynb` → Jupyter notebook used for fine-tuning
- Fine-tuned model files (`pytorch_model.bin`, `config.json`, `tokenizer.json`, etc.)

## 🚀 Usage

### 1. Load from Hugging Face

After uploading the model to the Hugging Face Hub, you can use it like this:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "haydarkadioglu/bart-base-cnn-finetuned"  # replace with your repo name
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

text = "The US president gave a speech today about..."
inputs = tokenizer([text], return_tensors="pt", max_length=1024, truncation=True)
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,
    max_length=150,
    early_stopping=True
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

## 📊 Training Details

* **Base Model**: BART-base
* **Dataset**: CNN/DailyMail (`cnn_dailymail` from Hugging Face)
* **Task**: Abstractive Summarization
* **Evaluation Metrics**: ROUGE-1, ROUGE-2, ROUGE-L

## 📦 Requirements

* Python 3.8+
* `transformers`
* `datasets`
* `torch`

## ✨ Example Output

**Input (news article):**

```
The heatwave affecting different regions of Turkey continues to negatively impact daily life. According to a statement by the General Directorate of Meteorology, air temperatures are expected to remain 6 to 10 degrees above seasonal norms throughout the coming week. Especially the elderly, children, and people with chronic illnesses are advised not to go outside between 11:00 AM and 4:00 PM, when the sun is at its strongest. Meanwhile, municipalities have started taking various measures to create cool areas for citizens. While misting systems are being installed in parks and gardens, air-conditioned resting areas have also been made available in some regions.
```

**Generated Summary:**

```
The heatwave affecting different regions of Turkey continues to negatively impact daily life . The heat is expected to remain 6 to 10 degrees above seasonal norms throughout the coming week . Municipalities have started taking various measures to create cool areas for citizens .
```

## 📜 License

This project is intended for research and educational purposes.
**Vasya777/blockassist-bc-lumbering_enormous_sloth_1756290603**
Author: Vasya777 · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:30:35Z · Last modified: 2025-08-27T10:30:42Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, lumbering enormous sloth, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**yeok/sft-Qwen2-5-3B-Instruct-random_insertion-200000**
Author: yeok · Downloads: 0 · Likes: 0 · Library: transformers · Pipeline: null
Created: 2025-08-27T01:19:06Z · Last modified: 2025-08-27T10:30:29Z
Tags: transformers, safetensors, text-generation-inference, unsloth, qwen2, trl, en, base_model:unsloth/Qwen2.5-3B-Instruct, base_model:finetune:unsloth/Qwen2.5-3B-Instruct, license:apache-2.0, endpoints_compatible, region:us

Card:

---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** yeok
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-3B-Instruct

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**cpatonn/Hermes-4-70B-AWQ-4bit**
Author: cpatonn · Downloads: 0 · Likes: 0 · Library: transformers · Pipeline: text-generation
Created: 2025-08-27T10:11:49Z · Last modified: 2025-08-27T10:30:06Z
Tags: transformers, safetensors, llama, text-generation, Llama-3.1, instruct, finetune, reasoning, hybrid-mode, chatml, function calling, tool use, json mode, structured outputs, atropos, dataforge, long context, roleplaying, chat, conversational, en, arxiv:2508.18255, base_model:NousResearch/Hermes-4-70B, base_model:quantized:NousResearch/Hermes-4-70B, license:llama3, autotrain_compatible, text-generation-inference, endpoints_compatible, compressed-tensors, region:us

Card:

---
language:
- en
license: llama3
tags:
- Llama-3.1
- instruct
- finetune
- reasoning
- hybrid-mode
- chatml
- function calling
- tool use
- json mode
- structured outputs
- atropos
- dataforge
- long context
- roleplaying
- chat
base_model: NousResearch/Hermes-4-70B
library_name: transformers
widget:
- example_title: Hermes 4
  messages:
  - role: system
    content: >-
      You are Hermes 4, a capable, neutrally-aligned assistant. Prefer concise, correct answers.
  - role: user
    content: >-
      Explain the difference between BFS and DFS to a new CS student.
model-index:
- name: Hermes-4-Llama-3.1-70B
  results: []
---

# Hermes 4 — Llama-3.1 70B

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/roT9o5bMYBtQziRMlaSDf.jpeg)

## Model Description

Hermes 4 70B is a frontier, hybrid-mode **reasoning** model based on Llama-3.1-70B by Nous Research that is aligned to **you**.

Read the Hermes 4 technical report here: <a href="https://arxiv.org/abs/2508.18255">Hermes 4 Technical Report</a>

Chat with Hermes in Nous Chat: https://chat.nousresearch.com

Training highlights include a newly synthesized post-training corpus emphasizing verified reasoning traces, massive improvements in math, code, STEM, logic, creativity, and format-faithful outputs, while preserving general assistant quality and broadly neutral alignment.

## What's new vs Hermes 3

- **Post-training corpus**: Massively increased dataset size, from 1M samples and 1.2B tokens to **~5M samples / ~60B tokens** blended across reasoning and non-reasoning data.
- **Hybrid reasoning mode** with explicit `<think>…</think>` segments when the model decides to deliberate, and options to make your responses faster when you want.
- **Reasoning** that is top quality and expressive, improving math, code, STEM, logic, and even creative writing and subjective responses.
- **Schema adherence & structured outputs**: trained to produce valid JSON for given schemas and to repair malformed objects.
- **Much easier to steer and align**: extreme improvements in steerability, especially reduced refusal rates.

## Our Mission: Frontier Capabilities Aligned to You

In pursuit of the mission of producing models that are open, steerable, and capable of producing the full range of human expression, while being able to be aligned to your values, we created a new benchmark, RefusalBench, that tests the model's willingness to be helpful in a variety of scenarios commonly disallowed by closed and open models.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/t_HvRYPEHV0pc8iS2zHHn.png)

Hermes 4 achieves SOTA on RefusalBench across all popular closed and open models in being helpful and conforming to your values, without censorship.

## Benchmarks (Hermes 4 70B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/Sa-X7ErRF0ej20P8qBv9i.png)

> Full tables, settings, and comparisons are in the technical report.

## Prompt Format

Hermes 4 uses the Llama-3 chat format with role headers and special tags.

**Basic chat:**

```
<|start_header_id|>system<|end_header_id|>

You are Hermes 4. Be concise and helpful.<|eot_id|>
<|start_header_id|>user<|end_header_id|>

Explain the photoelectric effect simply.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
```

### Reasoning mode

Reasoning mode can be activated with the chat template via the flag `thinking=True` or by using the following system prompt:

```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```

Note that you can add any additional system instructions before or after this system message, and it will adjust the model's policies, style, and effort of thinking, as well as its post-thinking style, format, identity, and more. You may also interleave the tool definition system message with the reasoning one.

When the model chooses to deliberate, it emits:

```
<|start_header_id|>assistant<|end_header_id|>

<think>
…model's internal reasoning may appear here…
</think>

Final response starts here…<|eot_id|>
```

Additionally, we provide a flag to keep the content in between the `<think> ... </think>` tags that you can play with by setting `keep_cots=True`.

## Function Calling & Tool Use

Hermes 4 supports function/tool calls *within* a single assistant turn, produced after its reasoning:

**System message (example):**

```
<|im_start|>system
You are a function-calling AI. Tools are provided inside <tools>…</tools>. When appropriate, call a tool by emitting a <tool_call>{...}</tool_call> object. After a tool responds (as <tool_response>), continue reasoning inside <think> and produce the final answer.

<tools>
{"type":"function","function":{"name":"get_weather","description":"Get weather by city","parameters":{"type":"object","properties":{"city":{"type":"string"}},"required":["city"]}}}
</tools><|im_end|>
```

Note that you may also simply place tool definitions into the "tools:" field of your messages, and the chat template will parse and create the system prompt for you. This also works with reasoning mode for improved accuracy of tool use.

The model will then generate tool calls within `<tool_call> {tool_call} </tool_call>` tags, for easy parsing. The tool_call tags are also added tokens, which makes them easy to parse while streaming!

There are also automatic tool parsers built in to vLLM and SGLang for Hermes; just set the tool parser in vLLM to `hermes` and in SGLang to `qwen25`.

## Inference Notes

- **Sampling defaults that work well:** `temperature=0.6, top_p=0.95, top_k=20`.
- **Template:** Use the Llama chat format for Hermes 4 70B and 405B as shown above, or set `add_generation_prompt=True` when using `tokenizer.apply_chat_template(...)`.

### Transformers example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "NousResearch/Hermes-4-Llama-3.1-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Hermes 4. Be concise."},
    {"role": "user", "content": "Summarize CRISPR in 3 sentences."}
]

# apply_chat_template with return_tensors="pt" returns the input_ids tensor directly.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=400,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For production serving on multi-GPU nodes, consider tensor parallel inference engines (e.g., SGLang/vLLM backends) with prefix caching.

## Inference Providers:

### Nous Portal:

<a href="https://portal.nousresearch.com"><img width=256 alt="chutes logo" src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/6YytY7N0mjCnBQvWo3qtv.png"></a>

### Chutes:

<a href="https://chutes.ai/app"><img width=256 alt="chutes logo" src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/l14AWPv6cSvaprpwK_IWY.png"></a>

### Nebius:

<a href="https://nebius.com/services/studio-inference-service">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vhL0oAomFa_awBdt2KF_x.png">
    <source media="(prefers-color-scheme: light)" srcset="https://cdn-uploads.huggingface.co/production/uploads/64b21cbb2fc8324fcb1dac03/LjAfeFfAz8ac5rV-iiwj5.png">
    <img width=256 alt="nebius.com logo" src="https://cdn-uploads.huggingface.co/production/uploads/64b21cbb2fc8324fcb1dac03/LjAfeFfAz8ac5rV-iiwj5.png">
  </picture>
</a>

### Luminal:

<a href="https://luminalai.com/">
  <img width=256 alt="luminal logo" src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/FIHsRdjMMP0HUjebiuJyH.png">
</a>

# Quantized / Smaller Variants

Hermes 4 is available in its original BF16 weights, as well as FP8 and GGUF variants (GGUF courtesy of the LM Studio team!).

FP8: https://huggingface.co/NousResearch/Hermes-4-70B-FP8

GGUF: https://huggingface.co/lmstudio-community/Hermes-4-70B-GGUF

Hermes 4 is also available in other sizes with similar prompt formats. See the Hermes 4 collection to explore them all: https://huggingface.co/collections/NousResearch/hermes-4-collection-68a731bfd452e20816725728

# How to cite

```bibtex
@misc{teknium2025hermes4technicalreport,
      title={Hermes 4 Technical Report},
      author={Ryan Teknium and Roger Jin and Jai Suphavadeeprasit and Dakota Mahan and Jeffrey Quesnelle and Joe Li and Chen Guang and Shannon Sands and Karan Malhotra},
      year={2025},
      eprint={2508.18255},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.18255},
}
```
**motza0025/blockassist-bc-fierce_webbed_pig_1756288857**
Author: motza0025 · Downloads: 0 · Likes: 0 · Library: null · Pipeline: null
Created: 2025-08-27T10:28:09Z · Last modified: 2025-08-27T10:28:29Z
Tags: gensyn, blockassist, gensyn-blockassist, minecraft, fierce webbed pig, arxiv:2504.07091, region:us

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fierce webbed pig
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**Nerva1228/erba**
Author: Nerva1228 · Downloads: 0 · Likes: 0 · Library: diffusers · Pipeline: text-to-image
Created: 2025-08-27T09:04:24Z · Last modified: 2025-08-27T10:26:22Z
Tags: diffusers, flux, lora, replicate, text-to-image, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us

Card:

---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: erba
---

# Erba

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `erba` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "erba",
    "lora_weights": "https://huggingface.co/Nerva1228/erba/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/erba', weight_name='lora.safetensors')
image = pipeline('erba').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/Nerva1228/erba/discussions) to add images that show off what you've made with this LoRA.
**runchat/lora-b6eeb241-0928-4dbc-bc4f-1c1beeb705fc-ia7tl5**
Author: runchat · Downloads: 0 · Likes: 0 · Library: diffusers · Pipeline: text-to-image
Created: 2025-08-27T10:26:14Z · Last modified: 2025-08-27T10:26:20Z
Tags: diffusers, flux, lora, text-to-image, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us

Card:

---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
base_model: black-forest-labs/FLUX.1-dev
tags:
- flux
- lora
- diffusers
- text-to-image
widget:
- text: 'a photo of a sks style'
  output:
    url: "placeholder.jpg"
---

# Flux LoRA: sks

This is a LoRA (Low-Rank Adaptation) model for Flux.1-dev fine-tuned on images with the trigger word `sks`.

## Files

- `pytorch_lora_weights.safetensors`: Diffusers format (use with the diffusers library)
- `pytorch_lora_weights_webui.safetensors`: Kohya format (use with AUTOMATIC1111, ComfyUI, etc.)

## Usage

### Diffusers Library

```python
from diffusers import FluxPipeline
import torch

# Load base model
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
)

# Load LoRA weights (diffusers format)
pipe.load_lora_weights("runchat/lora-b6eeb241-0928-4dbc-bc4f-1c1beeb705fc-ia7tl5", weight_name="pytorch_lora_weights.safetensors")
pipe = pipe.to("cuda")

# Generate image
prompt = "a photo of a sks style"
image = pipe(prompt, num_inference_steps=50, guidance_scale=3.5).images[0]
image.save("output.png")
```

### WebUI (AUTOMATIC1111, ComfyUI, etc.)

Download the `pytorch_lora_weights_webui.safetensors` file and place it in your WebUI's LoRA directory. Use the trigger word `sks` in your prompts.

## Training Details

- Base model: black-forest-labs/FLUX.1-dev
- Training steps: 500
- Learning rate: 0.001
- Batch size: 2
- LoRA rank: 16
- Trigger word: `sks`

## License

This model is trained on Flux.1-dev and inherits its non-commercial license. Please see the [license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) for usage restrictions.
dfgtrhjngt/blockassist-bc-coiled_gregarious_jellyfish_1756290270
dfgtrhjngt
2025-08-27T10:25:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "coiled gregarious jellyfish", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:25:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - coiled gregarious jellyfish --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF
mradermacher
2025-08-27T10:23:42Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest", "base_model:quantized:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest", "endpoints_compatible", "region:us" ]
null
2025-08-27T09:51:25Z
---
base_model: EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF).***

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q5_K_M.gguf) | Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
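## Python usage (sketch)

As a hedged complement to the usage notes above, here is one way to run these quants from Python via the llama-cpp-python bindings (an assumption of this sketch — any GGUF runner works); the Q4_K_M filename comes from the table:

```python
# Hedged sketch: download one quant from this repo and chat with it locally.
# Assumes `pip install llama-cpp-python huggingface-hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest-GGUF",
    filename="SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest.Q4_K_M.gguf",
    n_ctx=4096,  # context window for this session
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about quantization."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

This relies on the chat template embedded in the GGUF metadata; if the file ships without one, pass a plain string prompt to `llm(...)` instead.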
yaelahnal/blockassist-bc-mute_clawed_crab_1756289970
yaelahnal
2025-08-27T10:22:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:20:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
unitova/blockassist-bc-zealous_sneaky_raven_1756288205
unitova
2025-08-27T10:19:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:19:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakimjustbao/blockassist-bc-raging_subtle_wasp_1756288218
hakimjustbao
2025-08-27T10:18:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:18:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GroomerG/blockassist-bc-vicious_pawing_badger_1756288514
GroomerG
2025-08-27T10:18:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:17:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cixzer/blockassist-bc-gregarious_long_cheetah_1756289632
cixzer
2025-08-27T10:16:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gregarious long cheetah", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:16:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gregarious long cheetah --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1756289714
bah63843
2025-08-27T10:16:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:15:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mekpro/whisper-large-v3
mekpro
2025-08-27T10:15:32Z
7
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/whisper-large-v3", "base_model:finetune:unsloth/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-26T09:32:21Z
--- base_model: unsloth/whisper-large-v3 tags: - text-generation-inference - transformers - unsloth - whisper - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** mekpro - **License:** apache-2.0 - **Finetuned from model :** unsloth/whisper-large-v3 This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
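The card ships no inference snippet, so here is a minimal, hedged sketch using the transformers ASR pipeline; `sample.wav` is a hypothetical local audio file, and the checkpoint is assumed to load like any standard Whisper model:

```python
# Hedged sketch: transcribe audio with this fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mekpro/whisper-large-v3",
)
# "sample.wav" is a hypothetical file; long audio is processed in 30 s chunks.
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```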
xinnn32/blockassist-bc-meek_winged_caterpillar_1756289562
xinnn32
2025-08-27T10:13:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:13:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1756287995
quantumxnode
2025-08-27T10:12:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:12:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756289478
Vasya777
2025-08-27T10:11:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering enormous sloth", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:11:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lumbering enormous sloth --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MaestroDev19/CyberGemma-3-1b-merged
MaestroDev19
2025-08-27T10:07:00Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-27T10:02:51Z
--- base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** MaestroDev19 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
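As a hedged illustration only (the card does not document usage), the merged checkpoint should load like any Gemma 3 text model with a recent transformers release; the cybersecurity-flavored prompt is just a guess based on the model name:

```python
# Hedged sketch: chat with the merged model via the text-generation pipeline.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="MaestroDev19/CyberGemma-3-1b-merged",
    device_map="auto",  # requires accelerate; omit to run on CPU
)
messages = [{"role": "user", "content": "Explain what a SQL injection attack is."}]
out = chat(messages, max_new_tokens=256)
# The pipeline returns the full conversation; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```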
wahyudwixxx/blockassist-bc-twitchy_toothy_clam_1756289172
wahyudwixxx
2025-08-27T10:06:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "twitchy toothy clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:06:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - twitchy toothy clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
esi777/blockassist-bc-camouflaged_trotting_eel_1756289095
esi777
2025-08-27T10:06:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "camouflaged trotting eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:05:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - camouflaged trotting eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
indrarg/blockassist-bc-pensive_zealous_hyena_1756288990
indrarg
2025-08-27T10:05:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pensive zealous hyena", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:03:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pensive zealous hyena --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ypszn/blockassist-bc-yapping_pawing_worm_1756288852
ypszn
2025-08-27T10:01:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping pawing worm", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:01:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping pawing worm --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dejiat/blockassist-bc-savage_unseen_bobcat_1756288780
Dejiat
2025-08-27T10:00:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T10:00:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
stanfordnlp/stanza-sd
stanfordnlp
2025-08-27T09:59:11Z
4
1
stanza
[ "stanza", "token-classification", "sd", "license:apache-2.0", "region:us" ]
token-classification
2022-10-04T07:54:31Z
---
tags:
- stanza
- token-classification
library_name: stanza
language: sd
license: apache-2.0
---
# Stanza model for Sindhi (sd)

Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).

This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.

Last updated 2025-08-27 09:59:07.885
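A minimal usage sketch with the `stanza` library; the placeholder string stands in for real Sindhi text, and the processors actually loaded depend on what the Sindhi package provides:

```python
# Hedged sketch: run the Stanza pipeline for Sindhi.
import stanza

stanza.download("sd")        # fetch the Sindhi models
nlp = stanza.Pipeline("sd")  # build the default processor pipeline for Sindhi

doc = nlp("REPLACE WITH SINDHI TEXT")  # placeholder input
for sentence in doc.sentences:
    print([word.text for word in sentence.words])
```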
prolinkmoon/blockassist-bc-rabid_scaly_anteater_1756287899
prolinkmoon
2025-08-27T09:58:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rabid scaly anteater", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:47:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rabid scaly anteater --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yaelahnal/blockassist-bc-mute_clawed_crab_1756287767
yaelahnal
2025-08-27T09:58:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:43:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
LarryAIDraw/drusilla_zenless_zone_zero_pny
LarryAIDraw
2025-08-27T09:55:17Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-08-27T09:54:52Z
--- license: creativeml-openrail-m ---
VoilaRaj/81_f_TBTE2y
VoilaRaj
2025-08-27T09:53:20Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-27T09:52:39Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1756286908
maxibillion1975
2025-08-27T09:53:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "iridescent squeaky sandpiper", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:53:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - iridescent squeaky sandpiper --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
samairtimer/MyGemmaNPC
samairtimer
2025-08-27T09:52:29Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-27T09:49:21Z
--- base_model: google/gemma-3-270m-it library_name: transformers model_name: MyGemmaNPC tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for MyGemmaNPC This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="samairtimer/MyGemmaNPC", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
amsterdamNLP/Wav2Vec2-NL
amsterdamNLP
2025-08-27T09:50:03Z
10
1
null
[ "pytorch", "safetensors", "wav2vec2", "self-supervised", "pretraining", "speech", "audio", "nl", "arxiv:2506.00981", "license:openrail", "region:us" ]
null
2025-06-06T10:44:44Z
--- license: openrail language: - nl tags: - wav2vec2 - self-supervised - pretraining - speech - audio --- # Wav2Vec2-NL A Dutch Wav2Vec2-base model, pre-trained on 831 hours of exclusively Dutch speech. Pre-training data was extracted from a combination of: - the [Spoken Dutch Corpus](https://taalmaterialen.ivdnt.org/wp-content/uploads/documentatie/cgn_website/doc_English/topics/index.htm) (537 hours; incl. spontaneous conversations, interviews, read speech and news reports) - the Dutch component of [Multilingual LibriSpeech](https://www.openslr.org/94/) (211 hours; audiobook segments) - the Dutch subset of the [CommonVoice 16.1](https://huggingface.co/datasets/mozilla-foundation/common_voice_16_1) corpus (83 hours; read aloud speech) More information, incl. the training manifest and configuration is available in the [Wav2Vec2-NL repository on Zenodo](http://doi.org/10.5281/zenodo.15550628). Analyses of Dutch phonetic and lexical features encoded in Wav2Vec2-NL hidden states are reported in the paper [What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training](https://arxiv.org/abs/2506.00981) (Interspeech 2025; see full citation [below](#citation)). Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for an explanation of fine-tuning Wav2Vec2 models on HuggingFace. # Usage ```python from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('amsterdamNLP/Wav2Vec2-NL') model = Wav2Vec2Model.from_pretrained('amsterdamNLP/Wav2Vec2-NL') ``` # Citation The _Wav2Vec2-NL_ model was published as part of: de Heer Kloots, M., Mohebbi, H., Pouw, C., Shen, G., Zuidema, W., Bentum, M. (2025). What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training. _Proc. INTERSPEECH 2025_. https://doi.org/10.48550/arXiv.2506.00981 BibTex entry: ```bibtex @inproceedings{deheerkloots25_interspeech, title = {What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training}, author = {Marianne {de Heer Kloots} and Hosein Mohebbi and Charlotte Pouw and Gaofei Shen and Willem Zuidema and Martijn Bentum}, year = {2025}, booktitle = {Interspeech 2025}, doi = {10.21437/Interspeech.2025-1526}, } ```
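# Feature extraction example

Since the checkpoint has no tokenizer or ASR head, the typical use is extracting hidden representations. A minimal sketch, where the random one-second waveform is a stand-in for real 16 kHz mono audio:

```python
# Hedged sketch: extract hidden states from Wav2Vec2-NL.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("amsterdamNLP/Wav2Vec2-NL")
model = Wav2Vec2Model.from_pretrained("amsterdamNLP/Wav2Vec2-NL")
model.eval()

waveform = torch.randn(16000)  # stand-in for one second of 16 kHz mono audio
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

print(out.last_hidden_state.shape)  # (batch, frames, hidden), 768-dim for this base model
print(len(out.hidden_states))       # per-layer states, incl. the CNN feature encoder output
```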
Sarath3321/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_stubby_robin
Sarath3321
2025-08-27T09:49:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am rugged_stubby_robin", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-27T09:13:45Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am rugged_stubby_robin --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AI4Bread/FarmXpert
AI4Bread
2025-08-27T09:49:18Z
0
0
null
[ "agriculture", "multimodal", "phenotyping", "visual-question-answering", "crop-detection", "disease-detection", "freshness-evaluation", "ripeness-assessment", "object-detection", "en", "license:apache-2.0", "region:us" ]
object-detection
2025-08-27T02:51:38Z
---
license: apache-2.0
language: [en]
pretty_name: FarmXpert
tags:
- agriculture
- multimodal
- phenotyping
- visual-question-answering
- crop-detection
- disease-detection
- freshness-evaluation
- ripeness-assessment
- object-detection
size_categories: [large]
task_categories:
- image-classification
- object-detection
- visual-question-answering
- regression
---
<p align="center" width="100%">
<img src="./assets/farmXpert_logo.png" width="90%">
</p>

<div align=center>

[![Static Badge](https://img.shields.io/badge/FarmXpert-F7C97E)](https://huggingface.co/AI4Bread/FarmXpert)[![Dataset](https://img.shields.io/badge/Dataset-Hugging_Face-CFAFD4)](https://huggingface.co/datasets/AI4Bread/FarmXpert-120K)

</div>

## What is FarmXpert 👀

FarmXpert is a multimodal agricultural model built upon MiniCPM-o, designed to address critical challenges in agricultural phenotyping: the scarcity of large-scale, cross-species datasets; the limited adaptability of existing agricultural Multimodal Large Language Models (MLLMs) to diverse tasks; and their inability to precisely focus on specific regions. To overcome these hurdles, FarmXpert integrates three major innovations:

- FarmXpert-120K, a meticulously curated and comprehensive dataset comprising over 120,000 visual question-answering (VQA) entries. This dataset spans four distinct task types and covers more than 40 crop varieties.
- A task-adaptive routing mechanism embedded within its language model. This mechanism dynamically manages a wide range of agricultural tasks while simultaneously minimizing memory consumption.
- A mask-based, spatially aware feature extractor that employs a hybrid region representation. This enables precise analysis of user-defined, arbitrarily shaped regions.

This design positions FarmXpert to deliver superior performance compared to other models across tasks such as disease detection, ripeness assessment, freshness evaluation, and object detection. Furthermore, it offers users a more intuitive, comfortable, and flexible interactive experience.

## The model framework of FarmXpert

<div align=center>
<img src="./assets/task_routing_mechanism.png" width="60%" >
<br>
The task routing mechanism in FarmXpert's LLM design
<br>
</div>

<div align=center>
<img src="./assets/feature_extractor.png" width="60%" >
<br>
The spatial-aware feature extractor in FarmXpert's LLM design
<br>
</div>

## Installation

1. Set up the environment

Use conda to create a new virtual environment with Python 3.9.21. Run the following commands:

```
conda create --name FarmXpert python=3.9.21
conda activate FarmXpert
pip install -r requirements.txt
```

2. Download all the checkpoints:
- [FarmXpert-9B](https://huggingface.co/AI4Bread/FarmXpert)
- [FarmXpert LoRA model](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup/blob/main/open_clip_pytorch_model.bin)

You can load the model with the path to the LoRA adapter. We advise using an absolute path for your pretrained model, because LoRA saves only the adapter, and the absolute path in the adapter configuration JSON file is used to locate the pretrained model to load.
```
from PIL import Image
import torch
from transformers import AutoTokenizer
from peft import PeftModel
from MOEmodel_1_spital.modeling_minicpmo import MiniCPMO

# Load the base model and tokenizer from the same checkpoint directory
model = MiniCPMO.from_pretrained(
    './MOEmodel_1_spital',
    trust_remote_code=True,
    attn_implementation='sdpa',  # sdpa or flash_attention_2, no eager
    torch_dtype=torch.bfloat16,
)
model.to("cuda:0")
tokenizer = AutoTokenizer.from_pretrained('./MOEmodel_1_spital', trust_remote_code=True)

# Attach the fine-tuned LoRA adapter
adapter_path = "path_to_your_fine_tuned_checkpoint"
model = PeftModel.from_pretrained(model, adapter_path, trust_remote_code=True)

image = Image.open('your image path').convert('RGB')
prompt = "your question"
all_messages = [{
    "role": "user",
    "content": [image, prompt],
}]

res = model.chat(
    msgs=all_messages,
    tokenizer=tokenizer,
)
print(res)
```

### Checkpoints

FarmXpert-9B base model 🤗: [model](https://huggingface.co/AI4Bread/FarmXpert)

We provide models trained on the FarmXpert-120K dataset; please check the [four-task model](https://huggingface.co/AI4Bread/FarmXpert) (link to be added). Additionally, we offer models further trained using the Osprey-724K dataset; please check the [spatial-aware model](https://huggingface.co/AI4Bread/FarmXpert) (link to be added).

## Examples

<p align="center" width="100%">
<img src="./assets/examp_four_tasks.png" width="90%">
</p>

Examples of FarmXpert's performance on four agricultural phenotyping tasks.

<p align="center" width="100%">
<img src="./assets/examp_spatial.png" width="90%">
</p>

Examples of FarmXpert's performance on free region awareness tasks.

## Acknowledgement 💌

- [MiniCPM-o](https://github.com/OpenBMB/MiniCPM-o): the model base we built upon.
- [Osprey-724K](https://huggingface.co/datasets/AntGroup-MI/Osprey-724K): the data used for training the spatial-aware feature extractor.
vohuutridung/bartphow-sft
vohuutridung
2025-08-27T09:48:29Z
0
0
transformers
[ "transformers", "safetensors", "mbart", "text2text-generation", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-27T09:47:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF
mradermacher
2025-08-27T09:47:24Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "code", "medical ", "farmer", "doctor", "Mega-Series", "Cyber-Series", "Role-Play", "Self-Rag", "ThinkingBot", "milestone", "mega-series", "SpydazWebAI", "en", "dataset:gretelai/synthetic_text_to_sql", "dataset:HuggingFaceTB/cosmopedia", "dataset:teknium/OpenHermes-2.5", "dataset:Open-Orca/SlimOrca", "dataset:Open-Orca/OpenOrca", "dataset:cognitivecomputations/dolphin-coder", "dataset:databricks/databricks-dolly-15k", "dataset:yahma/alpaca-cleaned", "dataset:uonlp/CulturaX", "dataset:mwitiderrick/SwahiliPlatypus", "dataset:swahili", "dataset:Rogendo/English-Swahili-Sentence-Pairs", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "dataset:meta-math/MetaMathQA", "base_model:LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b", "base_model:quantized:LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-27T09:12:32Z
---
base_model: LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- code
- 'medical '
- farmer
- doctor
- Mega-Series
- Cyber-Series
- Role-Play
- Self-Rag
- ThinkingBot
- milestone
- mega-series
- SpydazWebAI
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SpydazWeb_AI_CyberTron_Ultra_7b-GGUF).***

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF/resolve/main/SpydazWeb_AI_CyberTron_Ultra_7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
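## Python usage (sketch)

For a programmatic route, here is a hedged sketch that pairs `huggingface_hub` with the llama-cpp-python bindings (both are assumptions of this example, not something this repo mandates); substitute any quant from the table above:

```python
# Hedged sketch: fetch one quant from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/SpydazWeb_AI_CyberTron_Ultra_7b-GGUF",
    filename="SpydazWeb_AI_CyberTron_Ultra_7b.Q4_K_M.gguf",
)
# n_gpu_layers=-1 offloads all layers if a GPU-enabled build is installed
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)
out = llm("Hello! Briefly introduce yourself.", max_tokens=128)
print(out["choices"][0]["text"])
```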
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-4-v2_6375
luckeciano
2025-08-27T09:46:42Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-27T05:51:44Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-4-v2_6375 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-4-v2_6375 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-4-v2_6375", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/zb6fw5v8) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.2 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
moyixiao/Qwen3-0.6B-GRPO-bf16
moyixiao
2025-08-27T09:44:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-27T09:44:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
esi777/blockassist-bc-camouflaged_trotting_eel_1756287708
esi777
2025-08-27T09:43:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "camouflaged trotting eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:42:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - camouflaged trotting eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756286133
lisaozill03
2025-08-27T09:42:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:42:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dejiat/blockassist-bc-savage_unseen_bobcat_1756287611
Dejiat
2025-08-27T09:40:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:40:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Nindur/blockassist-bc-scruffy_bipedal_stork_1756287526
Nindur
2025-08-27T09:39:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy bipedal stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:39:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy bipedal stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
angiecely8538/blockassist-bc-striped_invisible_jackal_1756285481
angiecely8538
2025-08-27T09:37:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "striped invisible jackal", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:37:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - striped invisible jackal --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
esi777/blockassist-bc-camouflaged_trotting_eel_1756287298
esi777
2025-08-27T09:36:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "camouflaged trotting eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:35:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - camouflaged trotting eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
unsloth/GLM-4.5
unsloth
2025-08-27T09:35:20Z
41
2
transformers
[ "transformers", "safetensors", "glm4_moe", "text-generation", "conversational", "en", "zh", "base_model:zai-org/GLM-4.5", "base_model:finetune:zai-org/GLM-4.5", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-05T04:32:09Z
--- license: mit language: - en - zh pipeline_tag: text-generation library_name: transformers base_model: - zai-org/GLM-4.5 --- # GLM-4.5 <div align="center"> <img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/> </div> <p align="center"> 👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community. <br> 📖 Check out the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank">technical blog</a>. <br> 📍 Use GLM-4.5 API services on <a href="https://docs.z.ai/guides/llm/glm-4.5">Z.ai API Platform (Global)</a> or <br> <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5">Zhipu AI Open Platform (Mainland China)</a>. <br> 👉 One click to <a href="https://chat.z.ai">GLM-4.5</a>. </p> ## Model Introduction The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications. Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses. We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development. As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, ranking **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at **59.8** while maintaining superior efficiency. ![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png) For more evaluation results, showcases, and technical details, please visit our [technical blog](https://z.ai/blog/glm-4.5). The technical report will be released soon. The model code, tool parser, and reasoning parser can be found in the implementations of [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py) and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py). ## Quick Start Please refer to our [GitHub page](https://github.com/zai-org/GLM-4.5) for more details.
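A minimal local-inference sketch for this checkpoint, assuming only the standard `transformers` chat-template API; it is not taken from the official quick start, and the prompt and generation settings are placeholders — consult the linked GitHub page for the supported recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical minimal setup; a 355B-parameter MoE model needs a multi-GPU
# node, which device_map="auto" will shard across when available.
model_id = "unsloth/GLM-4.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."}]
# Build the prompt with the model's bundled chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```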
bah63843/blockassist-bc-plump_fast_antelope_1756287182
bah63843
2025-08-27T09:33:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:33:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sakamotoz/blockassist-bc-silent_shaggy_rabbit_1756285471
sakamotoz
2025-08-27T09:33:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silent shaggy rabbit", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:33:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silent shaggy rabbit --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jaeyong2/Recommandation-System-Preview
jaeyong2
2025-08-27T09:33:19Z
13
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T23:45:38Z
--- library_name: transformers language: - en license: apache-2.0 --- ## Pretrain-Recommandation <p align="center"> <img src="Pretrain-Recommandation.png" alt="client" width="400"/> </p> ### Example ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "jaeyong2/Pretrain-Recommandation-Preview" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = """ You are a recommendation system AI. You input a list of items and a persona from the user. From the list, you recommend the item most appropriate for the persona, along with a reason why. (If no suitable item is found, you don't recommend it.) """.strip() content = """ Item list: [Men's all-in-one skincare set, Mineral sunscreen with UV protection, Deep moisturizing body oil, Retinol-based wrinkle cream] persona: A woman in her late 20s with sensitive skin who works in an office. She prefers natural ingredients and spends a lot of time outdoors. """.strip() system = {"role":"system", "content":prompt} user = {"role":"user", "content":content} messages = [system, user] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Switches between thinking and non-thinking modes. Default is True. ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() content = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n") print(content) ``` ### Result ``` selected_item : Mineral sunscreen with UV protection reason :Persona 2, a woman with sensitive skin and outdoor activities, would benefit from a mineral sunscreen with UV protection. It aligns with her preference for natural ingredients (mineral-based) and her need for protective UV rays, which are essential for outdoor workers. The product also matches Persona 1's likely focus on practical, non-toxic skincare products. ``` ## How the dataset was made <p align="center"> <img src="data.png" alt="client" width="400"/> </p> ## License - Qwen/Qwen3-1.7B : https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE ## Acknowledgement This research is supported by the **TPU Research Cloud** program.
VoilaRaj/81_f_u4lxHz
VoilaRaj
2025-08-27T09:32:20Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-27T09:31:40Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
esi777/blockassist-bc-camouflaged_trotting_eel_1756286984
esi777
2025-08-27T09:30:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "camouflaged trotting eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:30:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - camouflaged trotting eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1756286921
xinnn32
2025-08-27T09:29:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:29:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
danieltcowleyh1/blockassist-bc-peaceful_darting_newt_1756284766
danieltcowleyh1
2025-08-27T09:26:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful darting newt", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:26:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful darting newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rewwer/blockassist-bc-sleek_downy_termite_1756286053
rewwer
2025-08-27T09:26:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sleek downy termite", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:26:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sleek downy termite --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756286597
liukevin666
2025-08-27T09:24:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:24:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
WenFengg/swing27_14_31_17
WenFengg
2025-08-27T09:23:38Z
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-27T09:23:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EmilRyd/gpt-oss-20b-aquarat-enggerm-gt-1000-90
EmilRyd
2025-08-27T09:23:33Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-27T09:21:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
adityapatil343/blockassist-bc-quick_wily_magpie_1756286558
adityapatil343
2025-08-27T09:23:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wily magpie", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:23:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wily magpie --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
WenFengg/swing27_14_31_16
WenFengg
2025-08-27T09:23:06Z
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-27T09:22:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
weruopper/blockassist-bc-powerful_fluffy_mongoose_1756286509
weruopper
2025-08-27T09:22:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "powerful fluffy mongoose", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:21:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - powerful fluffy mongoose --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
WenFengg/swing27_14_31_15
WenFengg
2025-08-27T09:21:38Z
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-27T09:21:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1756284898
rvipitkirubbe
2025-08-27T09:20:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mottled foraging ape", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:20:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mottled foraging ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mehdimerbah/Qwen2-1.5B-GRPO-standard-config
mehdimerbah
2025-08-27T09:19:42Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "grpo", "trl", "arxiv:2402.03300", "endpoints_compatible", "region:us" ]
null
2025-08-27T07:25:03Z
--- library_name: transformers model_name: Qwen2-1.5B-GRPO-standard-config tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for Qwen2-1.5B-GRPO-standard-config This model is a fine-tuned version of an unspecified base model (the trainer metadata did not record it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mehdimerbah/Qwen2-1.5B-GRPO-standard-config", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.9.0.dev20250813+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
lemonhat/Llama-3.1-8B-Instruct-t1_25k_v4_tag5
lemonhat
2025-08-27T09:16:55Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-27T09:05:39Z
--- library_name: transformers license: other base_model: meta-llama/Llama-3.1-8B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: t1_25k_v4_tag5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t1_25k_v4_tag5 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the t1_25k_v4_tag5 dataset. It achieves the following results on the evaluation set: - Loss: 0.3057 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.323 | 0.1447 | 100 | 0.3802 | | 0.309 | 0.2894 | 200 | 0.3551 | | 0.3108 | 0.4342 | 300 | 0.3314 | | 0.2638 | 0.5789 | 400 | 0.3239 | | 0.3014 | 0.7236 | 500 | 0.3136 | | 0.3501 | 0.8683 | 600 | 0.3067 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
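For illustration, a hedged sketch of the hyperparameters listed in this card expressed as standard `transformers` `TrainingArguments`; the run itself used LLaMA-Factory's own config format, which the card does not reproduce, and `output_dir` here is a placeholder:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; with 8 devices and a
# per-device batch size of 1, the total train batch size comes to 8.
args = TrainingArguments(
    output_dir="t1_25k_v4_tag5",     # placeholder, not from the card
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```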
Dejiat/blockassist-bc-savage_unseen_bobcat_1756286186
Dejiat
2025-08-27T09:16:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:16:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tammycra121/blockassist-bc-marine_rangy_eel_1756284729
tammycra121
2025-08-27T09:16:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "marine rangy eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:16:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - marine rangy eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Satram/QYA_150_Packing
Satram
2025-08-27T09:16:19Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-27T09:15:56Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Satram - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
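A minimal loading sketch using Unsloth's standard `FastLanguageModel` API; the `max_seq_length` value is an assumption rather than something stated in the card:

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit, matching the bnb-4bit base it was
# trained from (a CUDA GPU is assumed by unsloth).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Satram/QYA_150_Packing",
    max_seq_length=2048,   # assumption; adjust to your context needs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```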
0xHoodee/blockassist-bc-yawning_stubby_rhino_1756284521
0xHoodee
2025-08-27T09:15:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning stubby rhino", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:15:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning stubby rhino --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
memland/blockassist-bc-vocal_shrewd_skunk_1756284328
memland
2025-08-27T09:15:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vocal shrewd skunk", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:15:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vocal shrewd skunk --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ACECA/lowMvMax_122
ACECA
2025-08-27T09:14:22Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-12T15:11:15Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
BjarneNPO/BjarneNPO-27_08_2025_10_58_39
BjarneNPO
2025-08-27T09:14:22Z
0
0
sentence-transformers
[ "sentence-transformers", "tensorboard", "safetensors", "gte", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:86218", "loss:MultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-m-v2.0", "base_model:finetune:Snowflake/snowflake-arctic-embed-m-v2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-27T09:10:00Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:86218 - loss:MultipleNegativesRankingLoss base_model: Snowflake/snowflake-arctic-embed-m-v2.0 widget: - source_sentence: Benutzer benötigt einen eigenen Zugang als Gemeinde. sentences: - Userin erklärt wie Urlaubsberechnung erfolgt. Nicht speichern sondern Urlaub berechnen anklicken. RT werden separat geführt - Benutzer muss sich einmal an denjenigen wenden, der mit dem Hauptzugang der Gemeinde arbeitet. - "Benutzer sendet uns einmal einen Screenshot der aktuellen Daten im KitaPlaner.\r\ \nAn Entwickler weitergegeben.\r\nLaut Entwickler sind die Daten alle im KitaPlaner\ \ angekommen. Die Archivkinder können noch einmal einzeln synchronisiert werden,\ \ wenn nötig." - source_sentence: Die Kollegin hat keinen Zugriff auf die Eltern-App, wer kann dies aktivieren? sentences: - Nachdem die Browserdaten der letzten 4 Wochen gelöscht wurden hat der Login wieder funktioniert - Gebeten sich dazu an den Träger zu wenden - Im Abgleich Belegung und BE konnte man erkennen, dass der entsprechenden Platzstruktur zwei Kinder zu viel zugeordnet waren. - source_sentence: "Hallo zusammen,\r\n \r\nwir haben im letzten Monat zusammen mit\ \ Digibox eine neue Funktion auf unserer Homepage eingestellt\r\nzur Buchung von\ \ Schulungen und Infoveranstaltungen. Mittlerweile ist dort alles weitestgehend\ \ getestet und funktioniert so wie gewünscht. Früher waren Termine per Überschrift\ \ getrennt, aber alle auf der gleichen Seite. Der Aufwand neue Termine einzustellen\r\ \n oder bestehende zu bearbeiten war nicht klein. Zusätzlich mussten neue Kategorien\ \ geschaffen werden für eine gesteigerte Nachfrage unserer Produkte. Von daher\ \ sollte diese Systematik umgestellt werden, sodass man erst zwei Dropdown-Menüs\ \ auswählen muss, damit\r\n Termine angezeigt werden. Durch das Fachteam wurden\ \ zusätzlich bei den Schulungen Dokumente zur Verfügung gestellt zu allen Formaten\ \ und Bundesländern, um für verbesserte Transparenz zu sorgen. Hier sind die beiden\ \ Seiten zu finden:\r\n \r\nFür Schulungen: \r\nhttps://kitaplus.de/services/schulungen\r\ \nFür Infoveranstaltungen: \r\nhttps://kitaplus.de/services/infoveranstaltungen\r\ \n \r\nDieses System erleichtert den Prozess im Hintergrund sehr stark bei der\ \ Erstellung von Terminen, und ist gleichzeitig für Interessenten deutlich übersichtlicher\ \ bei einer Vielzahl von verschiedenen Formaten. Die Formate werden nun erst\r\ \n angezeigt, nachdem man ein Bundesland im ersten Schritt ausgewählt hat. Danach\ \ kann man entweder eines oder alle Formate auswählen, unter denen Termine chronologisch\ \ sortiert auftauchen. Zuletzt kann man sich über das Kontaktformular zu einer\ \ Veranstaltung\r\n anmelden. \r\n \r\nDiese Mail dient als Info für alle, dass\ \ ihr Bescheid wisst, falls ihr das noch nicht mitbekommen habt. Falls jemand\ \ Feedback zu dieser Funktion haben sollte, könnt ihr euch gerne persönlich bei\ \ mir zurückmelden.\r\n\r\nIch wünsche allen eine angenehme Restwoche und genießt\ \ die Sonnenstrahlen!\r\n \r\nViele Grüße,\r\nWladimir" sentences: - N - N - laufend muss auf nein gesetzt werden, danach kann das Datum dort eingegeben werden - source_sentence: Userin fragt warum bei einem Bericht bei den Plätzen laut Betriebserlaubnis 15 steht und bei der Platzstruktur dann aber 17? sentences: - "1. Userin hatte noch keinen Personalbogen erstellt. Mit Userin den Personalbogen\ \ erstellt und freigegeben.\r\n2. 
Userin hatte schon das Beschäftigungsende eingegeben.\ \ Sie musste die Ausbildung und Funktion noch befristen." - Weil es genau so in den Einrichtungsstammdaten hinterlegt wurde. Ich kann bei der Platzstruktur ja was anderes hinterlegt haben als bei den Pl. laut Betriebserlaubnis - Es wurde für das kommende KGJ noch kein LB erstellt, daher kommt der Hinweis. - source_sentence: "Userin hinterlegt Email-Adresse im Benutzerkonto und speichert.\ \ Aber die Adresse wird trotz Bestätigung nicht gespeichert. \r\nEMA ist notwendig\ \ für 2FA\r\n\r\n Roesler = [email protected]" sentences: - Unter dem Namen der Dame gibt es nur einen Login. Vielleicht schaut sie mit dem Login einer anderen Kollegin auf die zweite Einrichtung? Oder sie hat einen Login als Träger? Dies klärt sie mit der Einrichtung ab. - Userin hat die Rolle "Mitarbeiter". - N pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0 results: - task: type: information-retrieval name: Information Retrieval dataset: name: Snowflake/snowflake arctic embed m v2.0 type: Snowflake/snowflake-arctic-embed-m-v2.0 metrics: - type: cosine_accuracy@1 value: 0.33766233766233766 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.5584415584415584 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.6363636363636364 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.7142857142857143 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.33766233766233766 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.29437229437229434 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.2597402597402597 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.17792207792207793 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.03770731433069095 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.09817282284814752 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.13292666929030567 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.1775285320739866 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.2380482750072461 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.46743970315398886 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.16950701037187113 name: Cosine Map@100 --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) <!-- at revision 95c2741480856aa9666782eb4afe11959938017f --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'GteModel'}) (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub (full repo id, including the namespace) model = SentenceTransformer("BjarneNPO/BjarneNPO-27_08_2025_10_58_39") # Run inference queries = [ "Userin hinterlegt Email-Adresse im Benutzerkonto und speichert. Aber die Adresse wird trotz Best\u00e4tigung nicht gespeichert. \r\nEMA ist notwendig f\u00fcr 2FA\r\n\r\n Roesler = [email protected]", ] documents = [ 'N', 'Unter dem Namen der Dame gibt es nur einen Login. Vielleicht schaut sie mit dem Login einer anderen Kollegin auf die zweite Einrichtung? Oder sie hat einen Login als Träger? Dies klärt sie mit der Einrichtung ab.', 'Userin hat die Rolle "Mitarbeiter".', ] query_embeddings = model.encode_query(queries) document_embeddings = model.encode_document(documents) print(query_embeddings.shape, document_embeddings.shape) # [1, 768] [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(query_embeddings, document_embeddings) print(similarities) # tensor([[0.3811, 0.1504, 0.1198]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset.
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `Snowflake/snowflake-arctic-embed-m-v2.0` * Evaluated with <code>scripts.InformationRetrievalEvaluatorCustom.InformationRetrievalEvaluatorCustom</code> with these parameters: ```json { "query_prompt_name": "query", "corpus_prompt_name": "document" } ``` | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.3377 | | cosine_accuracy@3 | 0.5584 | | cosine_accuracy@5 | 0.6364 | | cosine_accuracy@10 | 0.7143 | | cosine_precision@1 | 0.3377 | | cosine_precision@3 | 0.2944 | | cosine_precision@5 | 0.2597 | | cosine_precision@10 | 0.1779 | | cosine_recall@1 | 0.0377 | | cosine_recall@3 | 0.0982 | | cosine_recall@5 | 0.1329 | | cosine_recall@10 | 0.1775 | | **cosine_ndcg@10** | **0.238** | | cosine_mrr@10 | 0.4674 | | cosine_map@100 | 0.1695 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 86,218 training samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 80.67 tokens</li><li>max: 5231 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 25.16 tokens</li><li>max: 238 tokens</li></ul> | * Samples: | query | answer | |:---------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Nun ist die Monatsmeldung erfolgt, aber rote Ausrufezeichen tauchen auf.</code> | <code>Userin an das JA verwiesen, diese müssten ihr die Schloss-Monate zur Überarbeitung im Kibiz.web zurückgeben. Userin dazu empfohlen, die Kinder die nicht in kitaplus sind, aber in Kibiz.web - im KiBiz.web zu entfernen, wenn diese nicht vorhanden sind.</code> | | <code>Die Feiertage in den Stammdaten stimmen nicht.</code> | <code>Es besteht bereits ein Ticket dafür.</code> | | <code>Abrechnung kann nicht final freigegeben werden, es wird aber keiner Fehlermeldung angeziegt</code> | <code>im Hintergrund ist eine Fehlermeldung zu sehen. An Entwickler weitergeleitet. 
<br>Korrektur vorgenommen.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim", "gather_across_devices": false } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `gradient_accumulation_steps`: 4 - `learning_rate`: 4e-05 - `weight_decay`: 0.01 - `warmup_ratio`: 0.08 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 4 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 4e-05 - `weight_decay`: 0.01 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.08 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - 
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `gradient_accumulation_steps`: 4
- `learning_rate`: 4e-05
- `weight_decay`: 0.01
- `warmup_ratio`: 0.08
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.08
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
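As a sketch, the non-default values above (plus `num_train_epochs: 3` from the full list) map onto `SentenceTransformerTrainingArguments` roughly as follows; the output directory and save strategy are assumptions, and this is not the exact training script:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Rough reconstruction of the listed hyperparameters; output_dir is a placeholder.
args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    num_train_epochs=3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,
    learning_rate=4e-5,
    weight_decay=0.01,
    warmup_ratio=0.08,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed; must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)
```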
### Training Logs

<details><summary>Click to expand</summary>

| Epoch   | Step    | Training Loss | Snowflake/snowflake-arctic-embed-m-v2.0_cosine_ndcg@10 |
|:-------:|:-------:|:-------------:|:------------------------------------------------------:|
| 0.0297  | 10      | 2.8914        | -                                                      |
| 0.0593  | 20      | 2.8359        | -                                                      |
| 0.0890  | 30      | 2.4573        | -                                                      |
| 0.1187  | 40      | 2.3298        | -                                                      |
| 0.1484  | 50      | 2.215         | -                                                      |
| 0.1780  | 60      | 2.0003        | -                                                      |
| 0.2077  | 70      | 1.8714        | -                                                      |
| 0.2374  | 80      | 1.7492        | -                                                      |
| 0.2671  | 90      | 1.6268        | -                                                      |
| 0.2967  | 100     | 1.6434        | -                                                      |
| 0.3264  | 110     | 1.5872        | -                                                      |
| 0.3561  | 120     | 1.5221        | -                                                      |
| 0.3858  | 130     | 1.4166        | -                                                      |
| 0.4154  | 140     | 1.4093        | -                                                      |
| 0.4451  | 150     | 1.4323        | -                                                      |
| 0.4748  | 160     | 1.3748        | -                                                      |
| 0.5045  | 170     | 1.3443        | -                                                      |
| 0.5341  | 180     | 1.3358        | -                                                      |
| 0.5638  | 190     | 1.3118        | -                                                      |
| 0.5935  | 200     | 1.2791        | -                                                      |
| 0.6231  | 210     | 1.2576        | -                                                      |
| 0.6528  | 220     | 1.2493        | -                                                      |
| 0.6825  | 230     | 1.2586        | -                                                      |
| 0.7122  | 240     | 1.2468        | -                                                      |
| 0.7418  | 250     | 1.2017        | -                                                      |
| 0.7715  | 260     | 1.177         | -                                                      |
| 0.8012  | 270     | 1.1899        | -                                                      |
| 0.8309  | 280     | 1.161         | -                                                      |
| 0.8605  | 290     | 1.1743        | -                                                      |
| 0.8902  | 300     | 1.1568        | -                                                      |
| 0.9199  | 310     | 1.1422        | -                                                      |
| 0.9496  | 320     | 0.0           | -                                                      |
| 0.9792  | 330     | 0.0           | -                                                      |
| **1.0** | **337** | **-**         | **0.2375**                                             |
| 1.0089  | 340     | 0.331         | -                                                      |
| 1.0386  | 350     | 0.9826        | -                                                      |
| 1.0682  | 360     | 0.9872        | -                                                      |
| 1.0979  | 370     | 0.9697        | -                                                      |
| 1.1276  | 380     | 0.9763        | -                                                      |
| 1.1573  | 390     | 1.0233        | -                                                      |
| 1.1869  | 400     | 0.9827        | -                                                      |
| 1.2166  | 410     | 0.9754        | -                                                      |
| 1.2463  | 420     | 0.986         | -                                                      |
| 1.2760  | 430     | 0.9342        | -                                                      |
| 1.3056  | 440     | 0.9685        | -                                                      |
| 1.3353  | 450     | 0.9699        | -                                                      |
| 1.3650  | 460     | 0.906         | -                                                      |
| 1.3947  | 470     | 0.9959        | -                                                      |
| 1.4243  | 480     | 0.9386        | -                                                      |
| 1.4540  | 490     | 0.9565        | -                                                      |
| 1.4837  | 500     | 0.9308        | -                                                      |
| 1.5134  | 510     | 0.9325        | -                                                      |
| 1.5430  | 520     | 0.9232        | -                                                      |
| 1.5727  | 530     | 0.9413        | -                                                      |
| 1.6024  | 540     | 0.9183        | -                                                      |
| 1.6320  | 550     | 0.9651        | -                                                      |
| 1.6617  | 560     | 0.9034        | -                                                      |
| 1.6914  | 570     | 0.8517        | -                                                      |
| 1.7211  | 580     | 0.923         | -                                                      |
| 1.7507  | 590     | 0.8351        | -                                                      |
| 1.7804  | 600     | 0.858         | -                                                      |
| 1.8101  | 610     | 0.8404        | -                                                      |
| 1.8398  | 620     | 0.9191        | -                                                      |
| 1.8694  | 630     | 0.8746        | -                                                      |
| 1.8991  | 640     | 0.8732        | -                                                      |
| 1.9288  | 650     | 0.5662        | -                                                      |
| 1.9585  | 660     | 0.0           | -                                                      |
| 1.9881  | 670     | 0.0           | -                                                      |
| 2.0     | 674     | -             | 0.2252                                                 |
| 2.0178  | 680     | 0.4717        | -                                                      |
| 2.0475  | 690     | 0.7903        | -                                                      |
| 2.0772  | 700     | 0.7363        | -                                                      |
| 2.1068  | 710     | 0.7626        | -                                                      |
| 2.1365  | 720     | 0.7836        | -                                                      |
| 2.1662  | 730     | 0.7634        | -                                                      |
| 2.1958  | 740     | 0.7843        | -                                                      |
| 2.2255  | 750     | 0.8229        | -                                                      |
| 2.2552  | 760     | 0.7876        | -                                                      |
| 2.2849  | 770     | 0.7467        | -                                                      |
| 2.3145  | 780     | 0.7461        | -                                                      |
| 2.3442  | 790     | 0.7687        | -                                                      |
| 2.3739  | 800     | 0.7353        | -                                                      |
| 2.4036  | 810     | 0.7721        | -                                                      |
| 2.4332  | 820     | 0.7392        | -                                                      |
| 2.4629  | 830     | 0.7698        | -                                                      |
| 2.4926  | 840     | 0.7876        | -                                                      |
| 2.5223  | 850     | 0.7493        | -                                                      |
| 2.5519  | 860     | 0.7775        | -                                                      |
| 2.5816  | 870     | 0.717         | -                                                      |
| 2.6113  | 880     | 0.6827        | -                                                      |
| 2.6409  | 890     | 0.7727        | -                                                      |
| 2.6706  | 900     | 0.7433        | -                                                      |
| 2.7003  | 910     | 0.725         | -                                                      |
| 2.7300  | 920     | 0.7344        | -                                                      |
| 2.7596  | 930     | 0.7822        | -                                                      |
| 2.7893  | 940     | 0.7131        | -                                                      |
| 2.8190  | 950     | 0.7894        | -                                                      |
| 2.8487  | 960     | 0.7286        | -                                                      |
| 2.8783  | 970     | 0.7635        | -                                                      |
| 2.9080  | 980     | 0.7814        | -                                                      |
| 2.9377  | 990     | 0.2642        | -                                                      |
| 2.9674  | 1000    | 0.0           | -                                                      |
| 2.9970  | 1010    | 0.0           | -                                                      |
| 3.0     | 1011    | -             | 0.2380                                                 |

* The bold row denotes the saved checkpoint.

</details>

### Framework Versions

- Python: 3.10.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu129
- Accelerate: 1.10.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
alok0777/blockassist-bc-masked_pensive_lemur_1756285946
alok0777
2025-08-27T09:13:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked pensive lemur", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:13:28Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked pensive lemur
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756285933
liukevin666
2025-08-27T09:13:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-27T09:13:11Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).