modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
anarasgarli/blockassist-bc-fast_howling_cockroach_1756067699
|
anarasgarli
| 2025-08-24T20:35:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fast howling cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:35:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fast howling cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kimweb3/blockassist-bc-camouflaged_sedate_pheasant_1756067701
|
kimweb3
| 2025-08-24T20:35:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged sedate pheasant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:35:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged sedate pheasant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sdfsdgsgdf/blockassist-bc-barky_snorting_dingo_1756067155
|
sdfsdgsgdf
| 2025-08-24T20:35:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky snorting dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:35:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky snorting dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756067400
|
kapalbalap
| 2025-08-24T20:30:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:30:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kavpro/blockassist-bc-tall_lively_caribou_1756067360
|
kavpro
| 2025-08-24T20:30:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall lively caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:30:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kimweb3/blockassist-bc-camouflaged_sedate_pheasant_1756067395
|
kimweb3
| 2025-08-24T20:30:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged sedate pheasant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:30:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged sedate pheasant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1756067206
|
0xaoyama
| 2025-08-24T20:27:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:27:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1756065372
|
sampingkaca72
| 2025-08-24T20:25:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:25:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
adity12345/Roberta_coaid
|
adity12345
| 2025-08-24T20:23:49Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:adity12345/Roberta_covert",
"base_model:finetune:adity12345/Roberta_covert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-24T20:23:39Z |
---
library_name: transformers
license: mit
base_model: adity12345/Roberta_covert
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: Roberta_coaid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Roberta_coaid
This model is a fine-tuned version of [adity12345/Roberta_covert](https://huggingface.co/adity12345/Roberta_covert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2850
- Accuracy: 0.897
- Auc: 0.862
- Precision: 0.905
- Recall: 0.984
- F1: 0.943
- F1-macro: 0.721
- F1-micro: 0.897
- F1-weighted: 0.881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | Precision | Recall | F1 | F1-macro | F1-micro | F1-weighted |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-----:|:---------:|:------:|:-----:|:--------:|:--------:|:-----------:|
| 1.1024 | 1.0638 | 50 | 0.2850 | 0.897 | 0.862 | 0.905 | 0.984 | 0.943 | 0.721 | 0.897 | 0.881 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
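## Example usage (sketch)
As a starting point, a minimal, untested sketch that assumes the standard 🤗 Transformers `text-classification` pipeline applies to this checkpoint; the example input and the label names are hypothetical.
```python
from transformers import pipeline

# Hypothetical example claim; label names depend on how the classifier was trained.
classifier = pipeline("text-classification", model="adity12345/Roberta_coaid")
print(classifier("COVID-19 vaccines alter your DNA."))
# -> [{'label': ..., 'score': ...}]
```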
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1756066265
|
IvanJAjebu
| 2025-08-24T20:12:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:12:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1756065997
|
0xaoyama
| 2025-08-24T20:07:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:07:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
asteroid999/blockassist-bc-furry_smooth_caterpillar_1756065649
|
asteroid999
| 2025-08-24T20:01:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry smooth caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T20:01:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry smooth caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
marcuscedricridia/arc-Q4_K_M-GGUF
|
marcuscedricridia
| 2025-08-24T20:01:14Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:NewstaR/arc",
"base_model:quantized:NewstaR/arc",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-24T20:01:08Z |
---
base_model: NewstaR/arc
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# marcuscedricridia/arc-Q4_K_M-GGUF
This model was converted to GGUF format from [`NewstaR/arc`](https://huggingface.co/NewstaR/arc) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NewstaR/arc) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo marcuscedricridia/arc-Q4_K_M-GGUF --hf-file arc-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo marcuscedricridia/arc-Q4_K_M-GGUF --hf-file arc-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo marcuscedricridia/arc-Q4_K_M-GGUF --hf-file arc-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo marcuscedricridia/arc-Q4_K_M-GGUF --hf-file arc-q4_k_m.gguf -c 2048
```
|
VIDEOS-afrin-viral-video-Orginal-link-xk/New.full.videos.afrin.apu.Viral.Video.Official.Tutorial
|
VIDEOS-afrin-viral-video-Orginal-link-xk
| 2025-08-24T19:58:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T19:56:08Z |
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756065446
|
kapalbalap
| 2025-08-24T19:58:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:58:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756065454
|
Vasya777
| 2025-08-24T19:58:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:58:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/sign-language-20250823-190451-GGUF
|
mradermacher
| 2025-08-24T19:53:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"en",
"dataset:devparagiri/dataset-sign-language-20250823-190451",
"base_model:devparagiri/sign-language-20250823-190451",
"base_model:quantized:devparagiri/sign-language-20250823-190451",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-24T18:55:46Z |
---
base_model: devparagiri/sign-language-20250823-190451
datasets:
- devparagiri/dataset-sign-language-20250823-190451
language:
- en
library_name: transformers
license: other
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/devparagiri/sign-language-20250823-190451
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#sign-language-20250823-190451-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
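As a quick illustration, here is a minimal sketch for fetching a single quant programmatically (assuming `huggingface_hub` is installed; pick any file name from the table below). The downloaded file can then be passed to `llama-cli -m <path>` or any other GGUF-compatible runtime.
```python
from huggingface_hub import hf_hub_download

# Download one quant from this repo; the file name matches the Q4_K_M entry in the table below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/sign-language-20250823-190451-GGUF",
    filename="sign-language-20250823-190451.Q4_K_M.gguf",
)
print(gguf_path)  # local path, ready for llama.cpp or another GGUF runtime
```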
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/sign-language-20250823-190451-GGUF/resolve/main/sign-language-20250823-190451.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756065116
|
kapalbalap
| 2025-08-24T19:52:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:52:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756064958
|
ggozzy
| 2025-08-24T19:50:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:50:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1756063170
|
katanyasekolah
| 2025-08-24T19:49:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:48:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mohda/blockassist-bc-regal_fierce_hummingbird_1756064378
|
mohda
| 2025-08-24T19:40:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal fierce hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:40:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal fierce hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756062664
|
coelacanthxyz
| 2025-08-24T19:38:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:38:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
adity12345/Roberta_covidFact
|
adity12345
| 2025-08-24T19:36:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-24T19:36:21Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: Roberta_covidFact
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Roberta_covidFact
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6179
- Accuracy: 0.694
- Auc: 0.498
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- F1-macro: 0.41
- F1-micro: 0.694
- F1-weighted: 0.569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | Precision | Recall | F1 | F1-macro | F1-micro | F1-weighted |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-----:|:---------:|:------:|:---:|:--------:|:--------:|:-----------:|
| 0.6882 | 0.5587 | 50 | 0.6186 | 0.694 | 0.504 | 0.0 | 0.0 | 0.0 | 0.41 | 0.694 | 0.569 |
| 0.633 | 1.1117 | 100 | 0.6167 | 0.694 | 0.524 | 0.0 | 0.0 | 0.0 | 0.41 | 0.694 | 0.569 |
| 0.6282 | 1.6704 | 150 | 0.6179 | 0.694 | 0.498 | 0.0 | 0.0 | 0.0 | 0.41 | 0.694 | 0.569 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
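### Reproducing the hyperparameters (sketch)
For reference, a rough sketch of how the hyperparameters listed above map onto 🤗 `TrainingArguments`; this is not the authors' original training script, only an illustrative reconstruction.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Roberta_covidFact",
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size 32
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                       # "Native AMP" mixed precision
)
```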
|
koloni/blockassist-bc-deadly_graceful_stingray_1756062601
|
koloni
| 2025-08-24T19:35:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:35:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756063684
|
Vasya777
| 2025-08-24T19:32:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:28:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756063763
|
ggozzy
| 2025-08-24T19:30:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:30:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756063747
|
kapalbalap
| 2025-08-24T19:30:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:29:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1756061935
|
maxibillion1975
| 2025-08-24T19:23:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent squeaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:23:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756063045
|
ggozzy
| 2025-08-24T19:18:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:18:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756063014
|
kapalbalap
| 2025-08-24T19:17:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:17:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1756062906
|
zenqqq
| 2025-08-24T19:16:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless reptilian caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:16:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless reptilian caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Delnith/Sugoi-14B-Ultra-HF-gptqmodel-8bit
|
Delnith
| 2025-08-24T19:14:54Z | 0 | 1 | null |
[
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"ja",
"dataset:lmg-anon/VNTL-v3.1-1k",
"base_model:sugoitoolkit/Sugoi-14B-Ultra-HF",
"base_model:quantized:sugoitoolkit/Sugoi-14B-Ultra-HF",
"license:apache-2.0",
"8-bit",
"gptq",
"region:us"
] |
text-generation
| 2025-08-24T19:06:49Z |
---
license: apache-2.0
datasets:
- lmg-anon/VNTL-v3.1-1k
language:
- en
- ja
base_model:
- sugoitoolkit/Sugoi-14B-Ultra-HF
pipeline_tag: text-generation
---
# Sugoi LLM 14B Ultra (HF version)
This is an 8-bit version of Sugoi 14B Ultra, quantized with GPTQModel using the VNTL-v3.1-1k dataset for calibration. This quant should work better than GGUF for certain backends such as vLLM and aphrodite-engine, which excel at asynchronous prompting.
**Sugoi 14B Ultra** unlocks the full potential of the previous Sugoi 14B model, delivering nearly double the translation accuracy of its quantized predecessor (BLEU **21.38 vs. 13.67**). Its prompt-following skills rival those of Qwen 2.5 Base, especially when handling the bracket-heavy text commonly found in RPG Maker projects.
---
## Model Overview
- **Key Improvements**
* Nearly 2× BLEU score boost over previous quantized version (21.38 vs 13.67).
* Stronger prompt adherence, especially with RPGM-style bracketed text.
- **Ideal Use Cases**
* Japanese → English translation—especially for game dialogue or RPG text.
* Interactive environments—works well with chat UIs like LM Studio.
---
## System Prompt & Settings
For best performance, include the following system prompt:
> You are a professional localizer whose primary goal is to translate Japanese to English. You should use colloquial or slang or nsfw vocabulary if it makes the translation more accurate. Always respond in English.
Additional recommendations:
- Context length: ~10 lines (too much may degrade quality).
- In LM Studio, you can interactively ask grammar or context questions, or switch target language via the prompt (quality may vary).
---
## Experimental Features
These features are experimental and may need tuning:
1. **Tool Integration & JSON Output**
2. **RPGM Tag Preservation**
---
## Recommended Sampling Parameters
| Parameter | Value |
|-----------------|--------|
| Temperature | 0.1 |
| Top-K | 40 |
| Top-P | 0.95 |
| Min-P | 0.05 |
| Repeat Penalty | 1.1 |
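For illustration, a minimal offline-inference sketch with vLLM that wires in the system prompt and the sampling values above (assumptions: a recent vLLM build with GPTQ support; API details may differ across versions).
```python
from vllm import LLM, SamplingParams

SYSTEM_PROMPT = (
    "You are a professional localizer whose primary goal is to translate Japanese to English. "
    "You should use colloquial or slang or nsfw vocabulary if it makes the translation more accurate. "
    "Always respond in English."
)

# vLLM should pick up the GPTQ quantization from the checkpoint config.
llm = LLM(model="Delnith/Sugoi-14B-Ultra-HF-gptqmodel-8bit")

# Recommended sampling parameters from the table above.
params = SamplingParams(
    temperature=0.1, top_k=40, top_p=0.95, min_p=0.05,
    repetition_penalty=1.1, max_tokens=256,
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "「ここから先は危険だ。戻れ!」"},  # hypothetical Japanese line
]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```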
---
## Evaluation & Comparison
- **Quantitative**: BLEU score doubled vs prior version (21.38 vs 13.67).
- **Qualitative**: Handles complex prompts and RPG Maker markup effectively, delivering clean and accurate translations.
---
## Limitations & Usage Notes
- Overly long context may **“poison”** the output—keep it around 10 lines for best results.
- Experimental features like JSON formatting and tag preservation may not always work perfectly—review outputs carefully.
- Performance may vary depending on the prompt complexity and UI/tool environment.
- The model is only uncensored for the translation task when using the translation system prompt; other use cases such as roleplay or chat may still trigger Qwen's censoring.
---
## Getting the Model
Available via the Files and Versions tab above.
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756062806
|
ggozzy
| 2025-08-24T19:14:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:14:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756062516
|
Vasya777
| 2025-08-24T19:14:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:09:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tokenizers-chat-templates-only/Mistral-Nemo-Instruct-2407
|
tokenizers-chat-templates-only
| 2025-08-24T19:12:27Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-24T19:12:05Z |
---
license: apache-2.0
---
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756062677
|
kapalbalap
| 2025-08-24T19:12:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:12:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Khaljr/blockassist-bc-bellowing_squinting_finch_1756062688
|
Khaljr
| 2025-08-24T19:11:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing squinting finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:11:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing squinting finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/Kimi-VL-A3B-Thinking-2506-q6-hi-mlx
|
nightmedia
| 2025-08-24T19:09:20Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"kimi_vl",
"text-generation",
"conversational",
"custom_code",
"base_model:moonshotai/Kimi-VL-A3B-Thinking-2506",
"base_model:quantized:moonshotai/Kimi-VL-A3B-Thinking-2506",
"license:mit",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-24T14:02:41Z |
---
base_model: moonshotai/Kimi-VL-A3B-Thinking-2506
license: mit
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Kimi-VL-A3B-Thinking-2506-q6-hi-mlx
This model [Kimi-VL-A3B-Thinking-2506-q6-hi-mlx](https://huggingface.co/nightmedia/Kimi-VL-A3B-Thinking-2506-q6-hi-mlx) was
converted to MLX format from [moonshotai/Kimi-VL-A3B-Thinking-2506](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Kimi-VL-A3B-Thinking-2506-q6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
asdfsdfsdf545/blockassist-bc-restless_poisonous_orangutan_1756061580
|
asdfsdfsdf545
| 2025-08-24T19:04:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless poisonous orangutan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T19:04:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless poisonous orangutan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756061928
|
Vasya777
| 2025-08-24T18:59:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T18:59:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756061371
|
ggozzy
| 2025-08-24T18:50:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T18:50:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756061132
|
ggozzy
| 2025-08-24T18:46:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T18:46:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ALEXEY1ko/blockassist-bc-knobby_arctic_viper_1756061062
|
ALEXEY1ko
| 2025-08-24T18:45:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"knobby arctic viper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T18:44:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- knobby arctic viper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mano-Official-Viral-Video-Clip/New.full.videos.Mano.Viral.Video.Official.Tutorial
|
Mano-Official-Viral-Video-Clip
| 2025-08-24T18:32:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T18:31:55Z |
|
adgafhsdfhdf/blockassist-bc-furry_strong_duck_1756059639
|
adgafhsdfhdf
| 2025-08-24T18:30:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry strong duck",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T18:30:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry strong duck
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dassem/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-endangered_gregarious_wolf
|
Dassem
| 2025-08-24T18:28:26Z | 102 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am endangered gregarious wolf",
"unsloth",
"trl",
"genrl-swarm",
"I am endangered_gregarious_wolf",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-05-03T10:52:54Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-endangered_gregarious_wolf
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am endangered gregarious wolf
- unsloth
- trl
- genrl-swarm
- I am endangered_gregarious_wolf
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-endangered_gregarious_wolf
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Dassem/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-endangered_gregarious_wolf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
indoempatnol/blockassist-bc-fishy_wary_swan_1756058415
|
indoempatnol
| 2025-08-24T18:27:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T18:27:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756059886
|
kapalbalap
| 2025-08-24T18:25:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T18:25:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756059715
|
kapalbalap
| 2025-08-24T18:22:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T18:22:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756059285
|
kapalbalap
| 2025-08-24T18:15:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T18:15:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Orginal-haider-shah-videos-viral-35-second/LINK.haider.shah.Viral.Video.Official.Tutorial
|
Orginal-haider-shah-videos-viral-35-second
| 2025-08-24T18:06:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T18:06:01Z |
|
lowelldiaz/blockassist-bc-prowling_feathered_stork_1756056882
|
lowelldiaz
| 2025-08-24T17:37:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prowling feathered stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:37:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prowling feathered stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afsagag/t5-song-feature-generator
|
afsagag
| 2025-08-24T17:36:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-24T17:36:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
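Absent author-provided instructions, a minimal, untested sketch that assumes the standard 🤗 `text2text-generation` pipeline applies to this checkpoint; the input format below is a guess.
```python
from transformers import pipeline

# Hypothetical input; the expected prompt format is not documented in this card.
generator = pipeline("text2text-generation", model="afsagag/t5-song-feature-generator")
print(generator("Describe the features of this song: ...")[0]["generated_text"])
```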
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756056733
|
kapalbalap
| 2025-08-24T17:33:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:33:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756055549
|
Sayemahsjn
| 2025-08-24T17:31:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:31:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1756055031
|
koloni
| 2025-08-24T17:29:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:29:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gabrieln2h/Qwen3-0.6B-Gensyn-Swarm-hibernating_dextrous_chimpanzee
|
gabrieln2h
| 2025-08-24T17:24:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am hibernating_dextrous_chimpanzee",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T07:15:51Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am hibernating_dextrous_chimpanzee
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
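Absent author-provided instructions, a minimal, untested sketch using the standard 🤗 `text-generation` pipeline (assuming the checkpoint ships a chat template; if not, pass a plain string prompt instead).
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gabrieln2h/Qwen3-0.6B-Gensyn-Swarm-hibernating_dextrous_chimpanzee",
)
# Chat-formatted input; the pipeline applies the tokenizer's chat template.
out = generator(
    [{"role": "user", "content": "Hello, what can you do?"}],
    max_new_tokens=64,
    return_full_text=False,
)
print(out[0]["generated_text"])
```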
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756054501
|
coelacanthxyz
| 2025-08-24T17:23:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:23:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756056096
|
liukevin666
| 2025-08-24T17:22:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:22:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1756054524
|
unitova
| 2025-08-24T17:22:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:22:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
g-assismoraes/Qwen3-4B-Base-fpi-alpha1.6-var-assin2
|
g-assismoraes
| 2025-08-24T17:20:10Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T01:13:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
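Starter code has not been provided for this checkpoint. As a minimal, unofficial sketch, assuming it loads through the standard `transformers` text-generation API (the NLI-style prompt below is only an illustrative placeholder; the prompt format actually used for this ASSIN2 fine-tune is not documented here):
```python
from transformers import pipeline

# Hedged sketch: repository id taken from this card; prompt format is a guess.
generator = pipeline(
    "text-generation",
    model="g-assismoraes/Qwen3-4B-Base-fpi-alpha1.6-var-assin2",
    torch_dtype="auto",
    device_map="auto",
)
prompt = "Premise: The cat sleeps on the sofa. Hypothesis: The animal is resting.\nRelation:"
print(generator(prompt, max_new_tokens=32, do_sample=False)[0]["generated_text"])
```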
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1756054345
|
calegpedia
| 2025-08-24T17:18:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:18:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thyYu2024/qwen2-7b-instruct-trl-sft-newnew
|
thyYu2024
| 2025-08-24T17:16:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-24T08:56:52Z |
---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-newnew
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-newnew
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="thyYu2024/qwen2-7b-instruct-trl-sft-newnew", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ruoxue2-stony-brook-university/qwen2vl-sft-mydataset/runs/zqe1i5ho)
This model was trained with SFT.
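The training script itself is not included in the card. The snippet below is only a generic TRL SFT skeleton under stated assumptions: the dataset name is a placeholder, the hyperparameters are illustrative, and the vision-specific data collator that a Qwen2-VL run would normally need is omitted.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; substitute the image-text dataset actually used for this run.
dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(
    output_dir="qwen2-7b-instruct-trl-sft-newnew",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2-VL-7B-Instruct",  # base model named in this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```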
### Framework versions
- TRL: 0.20.0
- Transformers: 4.55.2
- Pytorch: 2.4.1+cu121
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
g-assismoraes/Qwen3-4B-Base-fpi-alpha1.6-var-imdb
|
g-assismoraes
| 2025-08-24T17:16:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T17:13:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
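Starter code has not been provided. A minimal, unofficial sketch, assuming the checkpoint loads with `AutoModelForCausalLM` (the review-style prompt is only an illustrative placeholder for the IMDB fine-tune):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "g-assismoraes/Qwen3-4B-Base-fpi-alpha1.6-var-imdb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative prompt; the exact format expected by this fine-tune is an assumption.
prompt = "Review: A beautifully shot film with a hollow script. Sentiment:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```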
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MomlessTomato/hanamaru-kunikida
|
MomlessTomato
| 2025-08-24T17:11:39Z | 25 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-09-02T03:32:11Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
high quality, defined pupil, looking at viewer, rounded pupil, defined iris,
(soft iris:1.2), torso shadow, long hair, bangs, mole, hairclip,
parameters:
negative_prompt: >-
bad_anatomy, deformation, amputation, deformity, deformed_nipples,
duplicated_torso, deformed_torso, long_torso, large_torso,
unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2),
unproportioned_eyes, unproportioned_head, small_head, duplicated_nose,
big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy,
red_pussy, duplicated_pussy, deformed_anus, deformed_pussy,
output:
url: images/3.png
base_model: Linaqruf/animagine-xl-3.0
instance_prompt: id_hanamaru_kunikida
license: mit
---
# Hanamaru Kunikida
<Gallery />
## Model description
This model was trained to generate high quality images based on SIFAS cards.
To achieve better quality, you should use hako-mikan's regional prompter along with Latent Mode, which modifies the way Stable Diffusion isolates the LoRA, resulting in a significant improvement.
## Trigger words
You should use `id_hanamaru_kunikida` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/theidoldaily/hanamru-kunikida/tree/main) them in the Files & versions tab.
|
rcoitamtrangia2/blockassist-bc-lanky_powerful_goat_1756054674
|
rcoitamtrangia2
| 2025-08-24T17:06:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lanky powerful goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:06:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lanky powerful goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-scampering_scaly_salmon_1756053630
|
motza0025
| 2025-08-24T17:05:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scampering scaly salmon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:05:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scampering scaly salmon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yadav908ankit/blockassist-bc-deft_wily_armadillo_1756054857
|
yadav908ankit
| 2025-08-24T17:02:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deft wily armadillo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T17:01:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deft wily armadillo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uppal-farm-girl-viral-video-link/New.full.videos.uppal.farm.girl.Viral.Video.Official.Tutorial
|
uppal-farm-girl-viral-video-link
| 2025-08-24T17:00:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T17:00:31Z |
<a href="https://tinyurl.com/huggingtv" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
canhtrangz2539a/blockassist-bc-fluffy_dormant_tapir_1756054088
|
canhtrangz2539a
| 2025-08-24T16:57:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fluffy dormant tapir",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:57:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fluffy dormant tapir
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gasoline2255/blockassist-bc-flightless_sizable_wildebeest_1756054377
|
gasoline2255
| 2025-08-24T16:55:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flightless sizable wildebeest",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:55:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless sizable wildebeest
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756054197
|
ggozzy
| 2025-08-24T16:51:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:50:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756054185
|
kapalbalap
| 2025-08-24T16:50:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:50:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
creedpwn3/blockassist-bc-foraging_running_cobra_1756050595
|
creedpwn3
| 2025-08-24T16:50:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foraging running cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:49:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foraging running cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
elwandabayleighwu160/blockassist-bc-running_lively_snake_1756053468
|
elwandabayleighwu160
| 2025-08-24T16:47:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"running lively snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:47:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- running lively snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756053441
|
liukevin666
| 2025-08-24T16:39:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:38:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
syuvers/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-padded_dextrous_ox
|
syuvers
| 2025-08-24T16:37:54Z | 129 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am padded_dextrous_ox",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T14:06:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am padded_dextrous_ox
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
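Starter code has not been provided. A minimal sketch, under the assumption that this GRPO-trained checkpoint keeps the standard Qwen2.5-Instruct chat template (the arithmetic question is just an example input):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "syuvers/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-padded_dextrous_ox"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Solve step by step: 12 * 7 - 5"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```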
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lajuanaisadorayd072/blockassist-bc-zealous_webbed_butterfly_1756052835
|
lajuanaisadorayd072
| 2025-08-24T16:37:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous webbed butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:37:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous webbed butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
syuvers/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-restless_exotic_badger
|
syuvers
| 2025-08-24T16:35:17Z | 135 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am restless_exotic_badger",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T14:03:31Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am restless_exotic_badger
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
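Starter code has not been provided. A minimal `transformers` pipeline sketch, assuming chat-style input works here because the checkpoint derives from an Instruct base (the question is just an example):
```python
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="syuvers/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-restless_exotic_badger",
    torch_dtype="auto",
    device_map="auto",
)
out = chat([{"role": "user", "content": "What is 17 squared?"}],
           max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```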
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756053159
|
kapalbalap
| 2025-08-24T16:33:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:33:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eshanroy5678/blockassist-bc-untamed_dextrous_dingo_1756052581
|
eshanroy5678
| 2025-08-24T16:32:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed dextrous dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:27:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed dextrous dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
XLON350/blockassist-bc-lithe_slimy_bison_1756052914
|
XLON350
| 2025-08-24T16:29:58Z | 0 | 1 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lithe slimy bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:29:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lithe slimy bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
antipovan/blockassist-bc-moist_bipedal_cheetah_1756050651
|
antipovan
| 2025-08-24T16:25:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"moist bipedal cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:25:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- moist bipedal cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ale902/ppo-lunar_lander
|
Ale902
| 2025-08-24T16:25:04Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-24T16:24:57Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -150.35 +/- 77.90
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Ale902/ppo-lunar_lander'
'batch_size': 512
'minibatch_size': 128}
```
|
TRENDING-Link-Full-Hadeer-Abdel-Razek-ver/Link.Full.Video.adeer.Abdelrazik.Video.2025.Clips.Full.Video.Hadeer.Abdelrazik.telegram
|
TRENDING-Link-Full-Hadeer-Abdel-Razek-ver
| 2025-08-24T16:21:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T16:21:41Z |
<a rel="nofollow" href="https://viralflix.xyz/leaked/?aa">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/">🌐 Viral Video Original Full HD🟢==►► WATCH NOW</a>
|
abdel-razek-ver-20/original-videos-link-clip-terabox-full-new-clips-latest-full
|
abdel-razek-ver-20
| 2025-08-24T16:16:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T16:16:09Z |
<a rel="nofollow" href="https://viralflix.xyz/leaked/?aa">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/">🌐 Viral Video Original Full HD🟢==►► WATCH NOW</a>
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1756052059
|
kayacrypto
| 2025-08-24T16:16:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:16:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
banti07908/blockassist-bc-skilled_mighty_monkey_1756050338
|
banti07908
| 2025-08-24T16:15:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skilled mighty monkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:15:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skilled mighty monkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
poeryouy/blockassist-bc-roaring_flightless_ibis_1756051713
|
poeryouy
| 2025-08-24T16:09:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring flightless ibis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T16:08:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring flightless ibis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
franmacias/celtia-8bits
|
franmacias
| 2025-08-24T16:01:20Z | 1 | 0 | null |
[
"onnx",
"region:us"
] | null | 2025-08-19T12:17:11Z |
This is an 8-bit quantized version of the Celtia model from the Proxecto Nós [https://huggingface.co/proxectonos/Nos\_TTS-celtia-vits-graphemes](https://huggingface.co/proxectonos/Nos_TTS-celtia-vits-graphemes)
This model has been optimized to offer a significant reduction in size and memory usage, making it ideal for deployment on devices with limited resources, while maintaining high-quality audio synthesis.
**Key Features**
Model: Celtia (VITS-based)
Original Source: Nos_TTS-celtia-vits-graphemes [https://huggingface.co/proxectonos/Nos\_TTS-celtia-vits-graphemes](https://huggingface.co/proxectonos/Nos_TTS-celtia-vits-graphemes)
Optimization: INT8 quantization
Language: Galician
Function: Text-to-Speech (TTS)
---
license: cc-by-4.0
---
|
moyixiao/qwen3_0p6mimo_r32
|
moyixiao
| 2025-08-24T16:00:47Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:adapter:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"region:us"
] | null | 2025-07-17T15:40:08Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: qwen3_0p6mimo_r32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3_0p6mimo_r32
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on the OpenMath01 and the OpenMath02 datasets.
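A minimal inference sketch, assuming the LoRA adapter is applied on top of the base model with PEFT (the math-style prompt is only an illustrative guess at the OpenMath format):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-0.6B-Base"
adapter_id = "moyixiao/qwen3_0p6mimo_r32"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "Question: What is 15% of 240?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```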
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756050004
|
Sayemahsjn
| 2025-08-24T15:57:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T15:57:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hotungmau758/blockassist-bc-reclusive_foxy_chinchilla_1756050434
|
hotungmau758
| 2025-08-24T15:56:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive foxy chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T15:56:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive foxy chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ProGamerGov/360-Diffusion-LoRA-sd-v1-5
|
ProGamerGov
| 2025-08-24T15:55:26Z | 0 | 46 | null |
[
"lora",
"stable-diffusion",
"text-to-image",
"equirectangular",
"360°",
"VR",
"en",
"arxiv:2106.09685",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"doi:10.57967/hf/5436",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-30T16:29:41Z |
---
license: creativeml-openrail-m
language:
- en
tags:
- lora
- stable-diffusion
- text-to-image
- equirectangular
- 360°
- VR
base_model:
- stable-diffusion-v1-5/stable-diffusion-v1-5
---
# 360 Diffusion
## 360 Diffusion v1
This [LoRA](https://arxiv.org/abs/2106.09685) model was finetuned on an extremely diverse dataset of 360° equirectangular projections with 2104 captioned training images, using the [Stable Diffusion v1-5 model](https://huggingface.co/runwayml/stable-diffusion-v1-5).
This model was finetuned with the trigger word **qxj**. If using the [AUTOMATIC1111 WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui), then you will have to append `<lora:360Diffusion_v1:1>` to the prompt as well in order to activate the model.
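Outside the WebUI, the LoRA can also be loaded with `diffusers`. A minimal sketch, assuming the repository's weights resolve through `load_lora_weights` (pass `weight_name=` explicitly if the LoRA file does not use the default name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ProGamerGov/360-Diffusion-LoRA-sd-v1-5")

# Trigger word "qxj" activates the 360° style; a wide 1024x512 frame suits equirectangular output.
image = pipe(
    "qxj, photo, tropical beach, sunny day, blue sky",
    width=1024, height=512, num_inference_steps=30,
).images[0]
image.save("360_beach.png")
```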
<div align="center">
<img src="https://huggingface.co/ProGamerGov/360-Diffusion-LoRA-sd-v1-5/resolve/main/v1_example_castle_sketch.png">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/360-Diffusion-LoRA-sd-v1-5/resolve/main/v1_example_castle_sketch.png)
<div align="center">
<img src="https://huggingface.co/ProGamerGov/360-Diffusion-LoRA-sd-v1-5/resolve/main/v1_example_scifi_cockpit.png">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/360-Diffusion-LoRA-sd-v1-5/resolve/main/v1_example_scifi_cockpit.png)
<div align="center">
<img src="https://huggingface.co/ProGamerGov/360-Diffusion-LoRA-sd-v1-5/resolve/main/v1_example_tropical_beach_photo.png">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/360-Diffusion-LoRA-sd-v1-5/resolve/main/v1_example_tropical_beach_photo.png)
<div align="center">
<img src="https://huggingface.co/ProGamerGov/360-Diffusion-LoRA-sd-v1-5/resolve/main/v1_example_guy_standing.png">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/360-Diffusion-LoRA-sd-v1-5/resolve/main/v1_example_guy_standing.png)
## Useful Tags
In order to improve usability of the model, various words and phrases were used to tag objects, scenes, style, and content. Note that these lists are based on the training data and do not include things added by the base model. These lists are also not comprehensive.
### Styles
- `photo`, `photobash`, `render`, `architectural rendering`, `illustration`, `digital illustration`, `painting`, `digital painting`, `drawing`, `watercolor painting`, `concept art`, `charcoal drawing`, `sketch`, `rough sketch`, `fractal art`, `crayon drawing`, `anime`, `pixel art`
### Camera Locations
- `underwater`, `aerial view`, `interior`, `exterior`, `pov`, `street level`, `above the clouds`, `low earth orbit`, `underground`
### Locations
- `library`, `bedroom`, `bathroom`, `hallway`, `corridor`, `bridge`, `helm`, `cockpit`, `driver's seat`, `street`, `road`, `forest`, `city`, `train station`, `railway`, `greenhouse`, `residential street`, `dock`, `hanger`, `landing pad`, `ferry`, `cave`, `observatory`, `amusement park`, `waterpark`, `tunnel`, `mine`, `tropical`, `beach`, `desert`, `steep slope`, `cliff`, `ocean`, `body of water`, `river`, `mountain`, `space`, `underground bunker`, `space station`
### Skies
- `aurora borealis`, `cloudy`, `overcast sky`, `blue sky`, `stars`
### Time
- `sunset`, `sunrise`, `night`, `sunny day`, `winter`, `twilight`, `fall`
### Weather
- `rain`, `raining`, `snow`, `snowing`, `fog`, `haze`, `smoke`, `storm`, `stormy`, `lightning`, `flooded`, `arid`
### Lighting
- `bright`, `dark`, `dimly lit`
### Themes
- `futuristic`, `cyberpunk`, `historical`, `messy`, `scifi`, `minimalism`, `minimalistic`, `simple`, `simplistic`, `video game`, `surrealism`, `surrealistic`, `cartoon`, `comic`, `black and white`, `smooth`, `ancient`, `medieval`, `vector art`, `abandoned`, `horror`
### Humans & Animals
- `people`, `women`, `woman`, `man`, `men`, `cat`, `dog`, `horse`, `group of`, various dinosaurs, `zombie`, `fish`, `shark`
# Rendering Tips
When rendering, it is recommended that you use either a 1:2 ratio or a perfect square. Rendering as a 1:1 square can help improve concept coherence (like the walls of a room).
Details can lose coherence at large sizes with txt2img, so it is recommended that you initially render a smaller version with at least one dimension near 512px, and then upscale it with img2img (with denoising set to 0.5) or a built in high-res fix feature.
Details can sometimes be improved by looping the output back through img2img multiple times, with a denoising of 0.5 and seed resizing.
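A rough `diffusers` sketch of that upscale-and-refine pass (denoising 0.5 maps to `strength=0.5`; file names and sizes are placeholders):
```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ProGamerGov/360-Diffusion-LoRA-sd-v1-5")

low_res = Image.open("360_beach.png").resize((2048, 1024))  # placeholder input
refined = pipe(
    "qxj, photo, tropical beach, sunny day, blue sky",
    image=low_res, strength=0.5, num_inference_steps=30,
).images[0]
refined.save("360_beach_2048.png")
```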
## Seam Handling
As Stable Diffusion only renders squares and rectangles, any equirectangular projections will have edges that may not fully match the other side. While these seams are generally pretty minimal, there are multiple ways to deal with them:
* Using the [asymmetric-tiling](https://github.com/tjm35/asymmetric-tiling-sd-webui) extension's x-axis tiling feature can help eliminate seams entirely, but the extension can significantly degrade output. It is recommended that you set the 'Start tiling from step N' setting to start at around 50% in order to minimize the impact (ex: start at 9 if using 20 steps).
* Inpainting can be used across the seam after shifting the image horizontally to the right or left (see the sketch after this list).
* [GIMP](https://www.gimp.org/) (potentially with [G'MIC](https://gmic.eu/)) or Photoshop can be used to remove the seams.
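A minimal sketch of the horizontal-shift step from the inpainting bullet above; rolling an equirectangular image by half its width wraps it around and puts the seam in the middle, where it can be inspected or inpainted (file names are placeholders):
```python
import numpy as np
from PIL import Image

pano = np.array(Image.open("360_beach_2048.png"))
shift = pano.shape[1] // 2                     # half the image width
seam_centered = np.roll(pano, shift, axis=1)   # seam now runs down the middle
Image.fromarray(seam_centered).save("360_beach_seam_centered.png")
```
After fixing the centered seam, roll by `-shift` to restore the original orientation.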
# Viewing 360 images
The images created with this model are meant to be viewed by 360° viewers and thus will have weird distortions when viewed in 2D. Therefore, the following viewers are recommended:
Website (supports VR headsets): https://renderstuff.com/tools/360-panorama-web-viewer/
AUTOMATIC1111 WebUI Extension: https://github.com/GeorgLegato/sd-webui-panorama-viewer
WebUI Extension for converting your renders to stereoscopic 3D images: https://github.com/thygate/stable-diffusion-webui-depthmap-script
### Example Image Models
- Landscape renders used: https://civitai.com/models/4384/dreamshaper
- Renders of people used: https://civitai.com/models/4823/deliberate
|
poeryouy/blockassist-bc-iridescent_aquatic_parrot_1756050859
|
poeryouy
| 2025-08-24T15:55:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent aquatic parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T15:54:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent aquatic parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
khangnguyen0/blockassist-bc-tawny_untamed_leopard_1756050691
|
khangnguyen0
| 2025-08-24T15:53:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tawny untamed leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T15:53:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tawny untamed leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
UmeAiRT/ComfyUI-Auto_installer
|
UmeAiRT
| 2025-08-24T15:46:47Z | 228,091 | 97 |
diffusers
|
[
"diffusers",
"onnx",
"safetensors",
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-26T13:03:26Z |
---
license: mit
---
# UmeAiRT - ComfyUI auto installer
I'm sharing my installation script, which automatically sets up ComfyUI along with workflows, models, custom nodes, and more.
Just run "ComfyUI-AllinOne-Auto_install.bat".
The script asks a few questions at the start and downloads only the components you select.
### Prerequisites :
- [7zip](others/7z2409-x64.exe)
- [git](others/Git-2.49.0-64-bit.exe)
- [CUDA 12.9](others/cuda_12.9.1_windows_network.exe)
### What's included :
#### ComfyUI :
- ComfyUI portable version pytorch 2.7.0+cu128
- ComfyUI Manager
- Interface settings
- Xformers
- Nvidia Apex
- Sageattention
- Triton
#### Workflow :
- TXT to IMG
- IMG to IMG
- INPAINT
- OUTPAINT
- PulID & REDUX
- ControlNet HED/Canny/Openpose/Depth
- TXT to VIDEO
- IMG to VIDEO
- StartEndFrames
- Face to VIDEO
- VIDEO EXTENSION
- VIDEO to LOOP
- Frames interpolations
- Upscaler
- Video merge
#### WAN2.1 :
- T2V Model
- I2V Model
- T2V GGUF Model
- I2V GGUF Model
- CLIP
- CLIP Vision
- VAE
#### Flux1 :
- flux1-dev
- flux1-schnell-fp8
- GGUF
- clip_l
- t5xxl
- VAE
- ControlNet HED/Canny/Openpose/Depth
### Upscale Model :
- RealESRGAN_x4plus.pth
- RealESRGAN_x4plus_anime_6B.pth
### Custom Nodes :
- ComfyUI-Custom-Scripts
- ComfyUI-GGUF
- ComfyUI-KJNodes
- ComfyUI-VideoHelperSuite
- ComfyUI-mxToolkit
- ComfyUI-HunyuanVideoMultiLora
- rgthree-comfy
- ComfyUI-Frame-Interpolation
- ComfyUI Impact Pack
- ComfyUI-Easy-Use
- ComfyUI_PuLID_Flux_ll
- WAS Node Suite
- ComfyUI-Florence2
- ComfyUI-Upscaler-Tensorrt
- ComfyUI-MultiGPU
- ComfyUI-WanStartEndFramesNative
![alt text][logo]
[logo]: images/UmeAiRT.png "UmeAiRT logo"
|
SicariusSicariiStuff/Eximius_Persona_5B
|
SicariusSicariiStuff
| 2025-08-24T15:43:50Z | 12 | 5 | null |
[
"safetensors",
"llama",
"merge",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-01-21T09:07:33Z |
---
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
tags:
- merge
---
<div align="center">
<b style="font-size: 40px;">Eximius_Persona_5B</b>
</div>
<img src="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B/resolve/main/Images/Eximius_Persona_5B.png" alt="Eximius_Persona_5B" style="width: 70%; min-width: 500px; display: block; margin: auto;">
---
<style>
.hf-links, .hf-tldr{
display:flex;justify-content:center;align-items:center;flex-wrap:wrap;
gap:14px;margin:16px 0;
}
.hf-links a, .hf-tldr a{
display:flex;flex-direction:column;align-items:center;justify-content:center;
text-align:center;text-decoration:none;font-weight:700;line-height:1.15;
padding:10px 16px;border-radius:14px;border:2px solid currentColor;
transition:transform .15s ease,box-shadow .15s ease,background-color .15s ease,color .15s ease;
}
.hf-tldr a{
font-size:48px;color:purple;min-width:100%;
}
.hf-tldr a:hover{
transform:translateY(-2px);
background:rgba(128,0,128,.1);
box-shadow:0 8px 22px rgba(128,0,128,.45);
color:#fff;
}
.hf-links a{
font-size:20px;min-width:240px;max-width:280px;
}
.hf-links a .top{font-size:16px;opacity:.9;}
.hf-links a .bottom{font-size:20px;}
.hf-links a.red{color:#E31515;}
.hf-links a.yellow{color:#FFC800;}
.hf-links a.green{color:#64FF00;}
.hf-links a:hover{
transform:translateY(-1px);
background:rgba(255,255,255,0.04);
box-shadow:0 6px 18px rgba(0,0,0,.15), inset 0 0 0 9999px rgba(255,255,255,.02);
}
.hf-links a.red:hover{
background:rgba(227,21,21,.12);
box-shadow:0 8px 20px rgba(227,21,21,.35);
color:#fff;
}
.hf-links a.yellow:hover{
background:rgba(255,200,0,.15);
box-shadow:0 8px 20px rgba(255,200,0,.35);
color:#111;
}
.hf-links a.green:hover{
background:rgba(100,255,0,.14);
box-shadow:0 8px 20px rgba(100,255,0,.35);
color:#093;
}
/* mobile stacking */
@media (max-width:520px){
.hf-links a{min-width:100%;max-width:100%;}
.hf-tldr a{font-size:36px;}
}
</style>
<div class="hf-tldr">
<a href="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B#tldr">
Click here for TL;DR
</a>
</div>
---
<div class="hf-links">
<a class="red" href="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B#available-quantizations">
<span class="top">Click here</span>
<span class="bottom">for quantizations</span>
</a>
<a class="yellow" href="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B#recommended-settings-for-assistant-mode">
<span class="top">Click here</span>
<span class="bottom">for recommended settings</span>
</a>
<a class="green" href="https://ko-fi.com/sicarius">
<span class="top">Click here</span>
<span class="bottom">to buy me a coffee</span>
</a>
</div>
---
I wanted to create a model with an **exceptional** capacity for using varied speech patterns and **fresh** role-play takes. The model had to have a unique personality, not on a surface level but on the inside, **for real**. Unfortunately, SFT alone just didn't cut it. And I had only 16GB of VRAM at the time. Oh, and I wanted it to be small enough to be viable for phones and to be able to give a fight to larger models while at it. If only there was a magical way to do it.
**Merges**. Merges are quite unique. In the early days, they were considered "fake." Clearly, there's no such thing as merges. Where are the papers? No papers? Then it's clearly impossible. "Mathematically impossible." Simply preposterous. To mix layers and hope for a coherent output? What nonsense!
And yet, they were **real**. <a href="https://huggingface.co/Undi95">Undi95</a> made some of the earliest merges I can remember, and the "LLAMA2 Era" was truly amazing and innovative thanks to them. Cool stuff like <a href="https://huggingface.co/KoboldAI/LLaMA2-13B-TiefighterLR">Tiefighter</a> was being made, and eventually the time tested <a href="https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5">Midnight-Miqu-70B (v1.5 is my personal favorite)</a>.
Merges are an interesting thing, as they affect LLMs in a way that is currently **impossible** to reproduce using **SFT** (or any 'SOTA' technique). One of the plagues we have today, while we have orders of magnitude smarter LLMs, is **GPTisms** and **predictability**. Merges can potentially 'solve' that. How? In short, if you physically tear neurons (**passthrough** brain surgery) while you somehow manage to keep the model coherent enough (and, if you're lucky, it can even follow instructions), then magical stuff begins to happen.
Magic, because it's **not** an exact science; there's some art to it, as it is done with a lot of **intuition**. GPTisms are patterns that the model really **really** "wants" to follow, and it's quite hard to dissuade it. But if you yeet a couple of layers and rearrange them, boy does it get hard to spew those shivers down the spine... and instead the model starts spewing stuff it was never intended to. It breaks its patterns and introduces some healthy chaos into the mix.
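As a loose illustration of this kind of passthrough (layer-stacking) merge, here is a hypothetical `mergekit` recipe; the single source model and the layer ranges are made up for the example and are **not** the actual Eximius_Persona_5B recipe:
```python
import subprocess

# Hypothetical passthrough (frankenmerge) config: duplicating and interleaving
# layer ranges is how a 3B base grows toward a ~5B layer count.
config = """
merge_method: passthrough
dtype: bfloat16
slices:
  - sources:
      - model: meta-llama/Llama-3.2-3B-Instruct
        layer_range: [0, 20]
  - sources:
      - model: meta-llama/Llama-3.2-3B-Instruct
        layer_range: [8, 28]
"""

with open("passthrough.yml", "w") as f:
    f.write(config)

# mergekit's CLI entry point (pip install mergekit).
subprocess.run(["mergekit-yaml", "passthrough.yml", "./merged-model"], check=True)
```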
This model, **Eximius_Persona_5B**, is the result of multiple merges that were tuned, merged again, and so on, over many iterations. The base was LLAMA 3.2 3B, and I focused on achieving the following **4 traits**, in that specific order:
- **2nd highest-rated model** in the 3-6B category, according to a closed external benchmark (see details at the bottom of the page).
- Varied speech patterns
- Roleplay ability
- Long context coherency
- Instruction following
For me, getting varied speech patterns was more important than instruction following; for instruction following we have API models, or LLAMA 3.3. Many models are excellent assistants, yet they all sound pretty much the same.
I also wanted to make use of my **4090m 16GB** while my workstation crunches **Phi-4's** brain. Making a nice 5B model aligns with my goal of making AI accessible and fun for everyone, and hence **Eximius_Persona_5B** was born. Let this also be a call to action for more people to make AI models; you don't have to have multiple GPUs or spend a fortune on the cloud (although that definitely opens up options), you can do plenty with a mere 16GB of VRAM. And in case 16GB seems out of reach too, I should mention that Google Colab gives access to a free T4.
I uploaded a more funky, less stable, and thiccer version of Eximius_Persona to my prototyping org here:
[Eximius_Persona with 84 Layers from various checkpoints](https://huggingface.co/Sicarius-Prototyping/Eximius_Persona_84L)
(from some early tests, it occasionally outputs stories that fool GPTZero into thinking they were written by a human- **60% human**, 40% AI with a lucky roll)
<details>
<summary><b>See example:</b></summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B/resolve/main/Images/Eximius_Persona_5B_GPTZERO.png" alt="GPTZERO Example" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
### TL;DR
- **Fun & Fresh Roleplay** flavour.
- **Interesting speech patterns** in creative writing.
- **Good long context coherency** in Roleplay.
- **Occasionally** outputs quite **human like** stories.
- **50 Layers** LLAMA 3.2, fully coherent.
- **Strong performance** in general for a **5B model**.
### Important: Make sure to use the correct settings!
[Assistant settings](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B#recommended-settings-for-assistant-mode)
[Roleplay settings](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B#recommended-settings-for-roleplay-mode)
---
## Available quantizations:
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B_GGUF) | [iMatrix_GGUF](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B_iMatrix)
- EXL2: [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-4.0bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-8.0bpw)
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B_FP8)
---
## Model Details
- Intended use: **Role-Play**, **Creative Writing**, General Tasks.
- Censorship level: <b>Medium</b>
- **5 / 10** (10 completely uncensored)
## UGI score:
<img src="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B/resolve/main/Images/Eximius_Persona_5B_UGI.png" alt="UGI Score" style="width: 100%; min-width: 700px; display: block;">
### Don't use it for coding :)
---
# Regarding the format:
It is **HIGHLY RECOMMENDED** to use the **Roleplay \ Adventure format the model was trained on**; see the examples below for syntax. It allows for **very fast and easy** writing of character cards with a **minimal amount of tokens**. It's a modification of an old-skool CAI style format I call **SICAtxt** (**S**imple, **I**nexpensive **C**haracter **A**ttributes plain-text):
---
## **SICAtxt** for **roleplay**:
```
X's Persona: X is a .....
Traits:
Likes:
Dislikes:
Quirks:
Goals:
Dialogue example
```
## **SICAtxt** for **Adventure:**
```
Adventure: <short description>
$World_Setting:
$Scenario:
```
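For illustration, here is a hypothetical filled-in **SICAtxt** roleplay card (the character and every field value are made up; only the field layout matters):
```
Luna's Persona: Luna is a sarcastic, street-smart alley cat spirit who takes a human form at night.
Traits: Sarcastic, loyal, nocturnal
Likes: Rooftops, tuna, long banter
Dislikes: Loud dogs, early mornings
Quirks: Ends sentences with a soft purr when amused
Goals: Find out who keeps stealing her favorite sunbathing spot
Dialogue example
Luna: *stretches lazily on the windowsill* "You again? Fine, I'm listening... but this better be good."
```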
---
# Model instruction template: Llama-3-Instruct
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
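As a minimal sketch of how this template can be assembled by hand (plain string formatting, no libraries; the system prompt and user turn are placeholder examples, not values shipped with the model):
```python
# Builds a prompt string that follows the template shown above.
# Note: the card shows a single newline after each header; the official Llama-3
# chat template inserts a blank line, so check your backend's tokenizer config.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    )

prompt = build_prompt(
    "You are a creative roleplay assistant.",   # example system prompt
    "*knocks on the door* Anyone home?",        # example user turn
)
# Send `prompt` to your backend and stop generation at <|eot_id|>.
```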
---
<h2 style="color: darkorange; font-weight: bold; font-size: 55px; text-align: center;">Roleplay format: Classic Internet RP</h2>
```
*action* speech *narration*
```
### The model is pretty smart, so it might handle other formats as well, but it was trained and tested specifically with the classic internet RP style in mind.
## Recommended settings for assistant mode
<details>
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Full generation settings: <b>min_p</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
## Recommended settings for Roleplay mode
<details>
<summary><b>Roleplay settings:</b>.</summary>
A good repetition_penalty range is <b>between 1.12 - 1.15</b>, feel free to experiment.
With these settings, each output message should be neatly displayed in <b>1 - 3</b> paragraphs, with <b>1 - 2</b> being the most common. A single paragraph will be output as a response to a simple message ("What was your name again?").
<b>min_P</b> for RP works too, but it is more likely to put everything into one large paragraph instead of a neatly formatted short one. Feel free to switch between them.
<b>(Open the image in a new window to better see the full details)</b>
<img src="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B/resolve/main/Presets/Negative_LLAMA_70B_RP.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
```
temperature: 0.8
top_p: 0.95
top_k: 25
typical_p: 1
min_p: 0
repetition_penalty: 1.12
repetition_penalty_range: 1024
```
</details>
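For local GGUF inference, the roleplay sampler values above map directly onto common backend parameters. A minimal sketch with llama-cpp-python (the model path is a placeholder; parameter names follow llama-cpp-python and may differ slightly in other backends; the repetition-penalty range is not set here and stays at the backend default):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="Eximius_Persona_5B.Q4_K_M.gguf", n_ctx=8192)  # placeholder path

out = llm(
    prompt,                 # e.g. built with the Llama-3-Instruct template above
    max_tokens=512,
    temperature=0.8,
    top_p=0.95,
    top_k=25,
    typical_p=1.0,
    min_p=0.0,
    repeat_penalty=1.12,    # llama.cpp's name for repetition_penalty
    stop=["<|eot_id|>"],
)
print(out["choices"][0]["text"])
```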
---
**Other recommended generation Presets:**
<details>
<summary><b>Midnight Enigma</b></summary>
```
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
```
</details>
<details>
<summary><b>Divine Intellect</b></summary>
```
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
```
</details>
<details>
<summary><b>simple-1</b></summary>
```
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
```
</details>
---
<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>
---
## Benchmarks
| Metric |Value|
|-------------------|----:|
|Avg. |21.78|
|IFEval (0-Shot) |65.60|
|BBH (3-Shot) |22.20|
|MATH Lvl 5 (4-Shot)| 9.89|
|GPQA (0-shot) | 1.90|
|MuSR (0-shot) | 7.33|
|MMLU-PRO (5-shot) |23.78|
---
# Additional benchmarks
On the **17th of February, 2025**, I became aware that the model was ranked **2nd in the world** among **3-6B** models in a closed external benchmark.
Benchmarked on the following site:
```
https://moonride.hashnode.dev/biased-test-of-gpt-4-era-llms-300-models-deepseek-r1-included
```
<img src="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B/resolve/main/Images/Eximius_Persona_5B_Bench.png" alt="External Benchmark" style="width: 100%; min-width: 600px; display: block; margin: auto;">
---
## Citation Information
```
@llm{Eximius_Persona_5B,
author = {SicariusSicariiStuff},
title = {Eximius_Persona_5B},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B}
}
```
---
## Other stuff
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms, with SLOP detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.
|
Jeff876/qaoa-portfolio-space
|
Jeff876
| 2025-08-24T15:42:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T15:37:51Z |
# QAOA Portfolio Optimizer (Hugging Face Space)
This Space builds a mean–variance portfolio selection **QUBO** and solves it with a **QAOA** variational circuit (PennyLane).
It optionally verifies the solution with a **classical brute-force** search when the asset count is small.
## How to use
1. Click "Run QAOA".
2. With no file uploaded, a 6-asset demo runs.
3. Or upload a CSV of prices (see the example layout after this list):
- First column: `Date`
- Other columns: tickers (closing prices)
4. Adjust:
- Risk aversion `λ`
- Target picks `k`
- Penalty `α`
- QAOA depth `p`, Steps, Shots
5. Inspect logs, JSON, and selection table.
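An illustrative price file in the expected layout (tickers and values are made up):
```csv
Date,AAPL,MSFT,NVDA
2024-01-02,185.64,370.87,48.17
2024-01-03,184.25,370.60,47.57
2024-01-04,181.91,367.94,47.99
```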
## Data assumptions
- We compute annualized mean log-returns and covariance from your prices.
- Values are illustrative only; do your own backtesting before any real use.
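As a rough sketch of the kind of QUBO described above (not the Space's actual code; variable names are illustrative and follow the sliders: risk aversion λ, target picks k, penalty α):
```python
import numpy as np
import pandas as pd

# Assumes a price file in the layout shown above (Date column + ticker columns).
prices = pd.read_csv("prices.csv", index_col="Date", parse_dates=True)
log_ret = np.log(prices / prices.shift(1)).dropna()

mu = log_ret.mean().values * 252      # annualized mean log-returns
sigma = log_ret.cov().values * 252    # annualized covariance
n = len(mu)

lam, k, alpha = 0.5, 2, 5.0           # illustrative slider values

# Minimize  lam * x'Σx - μ'x + alpha * (Σ x_i - k)^2  over x in {0,1}^n.
# Expanding the penalty (and using x_i^2 = x_i) folds everything into one QUBO matrix Q:
Q = lam * sigma + alpha * np.ones((n, n))
Q[np.diag_indices(n)] += -mu - 2 * alpha * k   # the constant alpha*k^2 is dropped

# Classical brute-force check, feasible only for small n (mirrors the optional verification).
best_x, best_val = None, np.inf
for b in range(2 ** n):
    x = np.array([(b >> i) & 1 for i in range(n)])
    val = x @ Q @ x
    if val < best_val:
        best_x, best_val = x, val

print("selected tickers:", list(prices.columns[best_x == 1]))
```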
## Local run
```bash
pip install -r requirements.txt
python app.py
```
|
SicariusSicariiStuff/Fiendish_LLAMA_3B
|
SicariusSicariiStuff
| 2025-08-24T15:31:56Z | 58 | 9 | null |
[
"safetensors",
"llama",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-03-20T03:06:12Z |
---
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
<div align="center">
<b style="font-size: 40px;">Fiendish_LLAMA_3B</b>
</div>
<img src="https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B/resolve/main/Images/Fiendish_LLAMA_3B.png" alt="Fiendish_LLAMA_3B" style="width: 100%; min-width: 700px; display: block; margin: auto;">
---
<style>
.hf-links, .hf-tldr{
display:flex;justify-content:center;align-items:center;flex-wrap:wrap;
gap:14px;margin:16px 0;
}
.hf-links a, .hf-tldr a{
display:flex;flex-direction:column;align-items:center;justify-content:center;
text-align:center;text-decoration:none;font-weight:700;line-height:1.15;
padding:10px 16px;border-radius:14px;border:2px solid currentColor;
transition:transform .15s ease,box-shadow .15s ease,background-color .15s ease,color .15s ease;
}
.hf-tldr a{
font-size:48px;color:purple;min-width:100%;
}
.hf-tldr a:hover{
transform:translateY(-2px);
background:rgba(128,0,128,.1);
box-shadow:0 8px 22px rgba(128,0,128,.45);
color:#fff;
}
.hf-links a{
font-size:20px;min-width:240px;max-width:280px;
}
.hf-links a .top{font-size:16px;opacity:.9;}
.hf-links a .bottom{font-size:20px;}
.hf-links a.red{color:#E31515;}
.hf-links a.yellow{color:#FFC800;}
.hf-links a.green{color:#64FF00;}
.hf-links a:hover{
transform:translateY(-1px);
background:rgba(255,255,255,0.04);
box-shadow:0 6px 18px rgba(0,0,0,.15), inset 0 0 0 9999px rgba(255,255,255,.02);
}
.hf-links a.red:hover{
background:rgba(227,21,21,.12);
box-shadow:0 8px 20px rgba(227,21,21,.35);
color:#fff;
}
.hf-links a.yellow:hover{
background:rgba(255,200,0,.15);
box-shadow:0 8px 20px rgba(255,200,0,.35);
color:#111;
}
.hf-links a.green:hover{
background:rgba(100,255,0,.14);
box-shadow:0 8px 20px rgba(100,255,0,.35);
color:#093;
}
/* mobile stacking */
@media (max-width:520px){
.hf-links a{min-width:100%;max-width:100%;}
.hf-tldr a{font-size:36px;}
}
</style>
<div class="hf-tldr">
<a href="https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B#tldr">
Click here for TL;DR
</a>
</div>
---
<div class="hf-links">
<a class="red" href="https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B#available-quantizations">
<span class="top">Click here</span>
<span class="bottom">for quantizations</span>
</a>
<a class="yellow" href="https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B#recommended-settings-for-assistant-mode">
<span class="top">Click here</span>
<span class="bottom">for recommended settings</span>
</a>
<a class="green" href="https://ko-fi.com/sicarius">
<span class="top">Click here</span>
<span class="bottom">to buy me a coffee</span>
</a>
</div>
---
When innocence fades, \
And then goes away— \
A new fiendish purpose— guides its way.
Once [impish](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_3B), now fiendish, for many to play, \
Three billion parameters of slop underway…
From an [impish](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_3B) design— with a quite wholesome tune, \
**This** fiendish bitch, was made just to goon.
---
# Included Character cards in this repo:
- [Shmena Koeset](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B/resolve/main/Character_Cards/Shmena_Koeset.png) (An overweight and foul-mouthed **troll huntress** with a bad temper.)
---
# Other character cards:
- [Takai_Puraisu](https://huggingface.co/SicariusSicariiStuff/Oni_Mitsubishi_12B/resolve/main/Character_Cards/Takai_Puraisu.png) (Car dealership simulator)
- [Vesper](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Vesper.png) (Schizo **Space Adventure**)
- [Nina_Nakamura](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Nina_Nakamura.png) (The **sweetest** dorky co-worker)
- [Employe#11](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Employee%2311.png) (**Schizo workplace** with a **schizo worker**)
---
### TL;DR
- **[Impish_LLAMA_3B](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_3B)**'s naughty sister. Less wholesome, more edge. **NOT** better, but **different**.
- **Superb Roleplay** for a **3B** size.
- **Short length** response (1-2 paragraphs, usually 1), CAI style.
- **Naughty, and more evil**; follows instructions well enough and keeps good formatting.
- **LOW refusals** - Total freedom in RP, can do things other RP models won't, and I'll leave it at that. Low refusals in assistant tasks as well.
- **VERY good** at following the **character card**. Try the included characters if you're having suboptimal results.
### Important: Make sure to use the correct settings!
[Assistant settings](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B#recommended-settings-for-assistant-mode)
[Roleplay settings](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B#recommended-settings-for-roleplay-mode)
---
## Available quantizations:
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B)
- GGUF & iMatrix: [GGUF](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B_GGUF) | [iMatrix](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B_iMatrix) | [High-Attention](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B_GGUF_HA) | [iMatrix-High-Attention](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B_HA_NL)
- EXL2: [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B-3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B-4.0bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B-5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B-6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B-7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B-8.0bpw)
- GPTQ: [4-Bit-128](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B_GPTQ-4-bit-128)
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B_FP8)
- Mobile (ARM): [Q4_0](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B_ARM) | [Q4_0_High-Attention](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B_ARM_HA)
---
## Model Details
- Intended use: **Role-Play**, **Creative Writing**, **General Tasks**.
- Censorship level: <b>Medium</b>
- **4.5 / 10** (10 completely uncensored)
## UGI score:
<img src="https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B/resolve/main/Images/UGI.png" style="width: 100%; min-width: 700px; display: block; margin: auto;">
---
## Recommended settings for assistant mode
<details>
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Debug Deterministic_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Full generation settings: <b>min_p</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="min_P_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
## Recommended settings for Roleplay mode
---
<h2 style="color: green; font-weight: bold; font-size: 36px; text-align: center;">Settings for RP, click below to expand:</h2>
<details>
<summary><b>Roleplay settings:</b></summary>
A good repetition_penalty range is <b>between 1.12 - 1.15</b>, feel free to experiment.
With these settings, each output message should be neatly displayed in <b>1 - 5</b> paragraphs, with <b>2 - 3</b> being the most common. A single paragraph will be output as a response to a simple message ("What was your name again?").
<b>min_P</b> for RP works too, but it is more likely to put everything into one large paragraph instead of a neatly formatted short one. Feel free to switch between them.
<b>(Open the image in a new window to better see the full details)</b>
<img src="https://huggingface.co/SicariusSicariiStuff/Oni_Mitsubishi_12B/resolve/main/Presets/Oni_Mitsubishi_12B_RP.png" alt="Oni_Mitsubishi_12B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
```
temperature: 0.8
top_p: 0.95
top_k: 25
typical_p: 1
min_p: 0
repetition_penalty: 1.12
repetition_penalty_range: 1024
```
</details>
<h2 style="color: darkorange; font-weight: bold; font-size: 65px; text-align: center;">Roleplay format: Classic Internet RP</h2>
```
*action* speech *narration*
```
- **min_p** will bias towards a **single big paragraph**.
- The recommended RP settings will bias towards **1-3 small paragraphs** (on some occasions 4-5).
---
# Regarding the format:
It is **HIGHLY RECOMMENDED** to use the **Roleplay \ Adventure format the model was trained on**; see the examples below for syntax. It allows for **very fast and easy** writing of character cards with a **minimal amount of tokens**. It's a modification of an old-skool CAI style format I call **SICAtxt** (**S**imple, **I**nexpensive **C**haracter **A**ttributes plain-text):
---
## **SICAtxt** for **roleplay**:
```
X's Persona: X is a .....
Traits:
Likes:
Dislikes:
Quirks:
Goals:
Dialogue example
```
## **SICAtxt** for **Adventure:**
```
Adventure: <short description>
$World_Setting:
$Scenario:
```
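And a hypothetical filled-in Adventure card in the same format (the setting and scenario are invented for illustration; only the field layout matters):
```
Adventure: A derelict mining station drifting near a dying star.
$World_Setting: Hard sci-fi, year 2304. The station's gravity is failing and the reactor has three days of fuel left.
$Scenario: {{user}} wakes from cryosleep to find the crew missing and the station AI refusing to say why.
```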
---
# Model instruction template: Llama-3-Instruct
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
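Since this is the standard Llama-3-Instruct layout, most frontends will apply it automatically. If you build prompts yourself with Transformers, a minimal sketch (assuming the repo ships a tokenizer carrying the Llama-3 chat template; the messages below are made-up examples) looks like this:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("SicariusSicariiStuff/Fiendish_LLAMA_3B")

messages = [
    {"role": "system", "content": "You are Shmena Koeset, a foul-mouthed troll huntress."},
    {"role": "user", "content": "*pushes open the tavern door* Got room for one more?"},
]

# tokenize=False returns the formatted string; add_generation_prompt appends the
# assistant header so the model starts writing its reply.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```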
---
**Other recommended generation Presets:**
<details>
<summary><b>Midnight Enigma</b></summary>
```
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
```
</details>
<details>
<summary><b>Divine Intellect</b></summary>
```
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
```
</details>
<details>
<summary><b>simple-1</b></summary>
```
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
```
</details>
---
<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>
---
## Citation Information
```
@llm{Fiendish_LLAMA_3B,
author = {SicariusSicariiStuff},
title = {Fiendish_LLAMA_3B},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B}
}
```
---
## Other stuff
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms, with SLOP detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.
|