modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string
---|---|---|---|---|---|---|---|---|---
Muapi/urban-decay-reclaimed-by-nature | Muapi | 2025-08-25T07:38:58Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-25T07:38:41Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Urban Decay: Reclaimed by Nature

**Base model**: Flux.1 D
**Trained words**: abndnd_plcs_rclmd
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:863683@966412", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/unfazed-cybrsync | Muapi | 2025-08-25T07:38:13Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-25T07:37:50Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Unfazed CybrSync

**Base model**: Flux.1 D
**Trained words**: cyberpunk, cybernetic, Cyberware, cybernetic lines, cyborg, exposed mechanics, mechanical parts, robot joints, cable, CybrSync
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:1343679@2019567", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
xinghua001/oxford-pet-segmentation | xinghua001 | 2025-08-25T07:36:51Z | 0 | 0 | segmentation-models-pytorch | ["segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us"] | image-segmentation | 2025-08-25T07:36:45Z |
---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# FPN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
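Once loaded, inference follows the usual `segmentation_models_pytorch` pattern; a minimal sketch continuing from the snippet above (the input size and threshold here are illustrative assumptions, not values documented in this card):
```python
import torch

model.eval()  # `model` from the snippet above
x = torch.randn(1, 3, 256, 256)  # dummy RGB batch; match your real preprocessing
with torch.no_grad():
    logits = model(x)  # shape: (1, 1, 256, 256) for classes=1
mask = (logits.sigmoid() > 0.5).long()  # binary pet-vs-background mask
```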
## Model init parameters
```python
model_init_params = {
    "encoder_name": "resnet34",
    "encoder_depth": 5,
    "encoder_weights": "imagenet",
    "decoder_pyramid_channels": 256,
    "decoder_segmentation_channels": 128,
    "decoder_merge_policy": "add",
    "decoder_dropout": 0.2,
    "decoder_interpolation": "nearest",
    "in_channels": 3,
    "classes": 1,
    "activation": None,
    "upsampling": 4,
    "aux_params": None
}
```
## Model metrics
```json
[
    {
        "test_per_image_iou": 0.9079743027687073,
        "test_dataset_iou": 0.9152143001556396
    }
]
```
## Dataset
Dataset name: Oxford Pet
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1756107321 | IvanJAjebu | 2025-08-25T07:36:29Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:36:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ShihteSiao/Talkia_LoRA | ShihteSiao | 2025-08-25T07:34:46Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-14T11:01:57Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ShihteSiao
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756107243 | Ferdi3425 | 2025-08-25T07:34:44Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:34:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shigure451/1 | shigure451 | 2025-08-25T07:34:26Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-25T07:33:43Z |
---
license: apache-2.0
---
|
hoan17/saving_LOVv2e500s50_100 | hoan17 | 2025-08-25T07:34:20Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "trl", "o2o", "reinforcement-learning", "text-to-image", "stable-diffusion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2025-08-25T07:33:55Z |
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
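Since the repo exposes a `StableDiffusionPipeline` (per the `diffusers:StableDiffusionPipeline` tag), a minimal generation sketch, assuming standard diffusers usage (the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_LOVv2e500s50_100", torch_dtype=torch.float16
).to("cuda")
image = pipe("a scenic mountain lake at sunrise").images[0]
image.save("sample.png")
```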
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1756105592 | helmutsukocok | 2025-08-25T07:31:02Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:30:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shalinm/Kemcho-Gemma-3-1B-IT | shalinm | 2025-08-25T07:29:51Z | 11 | 0 | transformers | ["transformers", "safetensors", "gemma3_text", "text-generation", "gemma", "gemma-3", "google", "merged", "finetuned", "instruction-tuning", "sft", "gujarati", "bf16", "conversational", "gu", "base_model:google/gemma-3-1b-it", "base_model:finetune:google/gemma-3-1b-it", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-18T18:29:42Z |
---
language:
- gu
license: gemma
library_name: transformers
pipeline_tag: text-generation
base_model: google/gemma-3-1b-it
tags:
- gemma
- gemma-3
- google
- merged
- finetuned
- instruction-tuning
- sft
- gujarati
- bf16
model_type: gemma3
---
## Kemcho-Gemma-3-1B-IT
Kemcho-Gemma-3-1B-IT is a fine-tuned version of `google/gemma-3-1b-it` for Gujarati instruction following and general assistant tasks. Training used LoRA, and the adapters were merged into the base weights for single-repo deployment.
- Architecture: Gemma 3 (1B)
- Weights: `model.safetensors` (~4 GB, bf16 recommended)
- Data: [Alpaca-Gujarati](https://huggingface.co/datasets/Tensoic/Alpaca-Gujarati) (~10,000 example subset)
- Best for: Gujarati chat, rewriting, summarization, simple Q&A
### Quick Start
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "shalinm/Kemcho-Gemma-3-1B-IT"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "તમે મદદરૂપ ગુજરાતી સહાયક છો."},
    {"role": "user", "content": "આપેલ વાક્યોને એક વાક્યમાં ભેગા કરો.\n\nહું કામ માટે મોડો પડ્યો હતો. મારે મારા બાળકોને શાળાએ લઈ જવાનું હતું."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Model Details
- Base model: `google/gemma-3-1b-it`
- Training method: SFT using LoRA, then merged into base weights
- LoRA config (pre-merge; see the sketch after this list):
  - rank: 16
  - alpha: 32
  - dropout: 0.1
  - target modules: attention `q_proj`, `k_proj`, `v_proj`, `o_proj`; mlp `up_proj`, `down_proj`, `gate_proj`
- Trainable params (pre-merge): ~9.69M of ~1.01B (~0.96%)
- Precision: trained/evaluated in bfloat16
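For reference, the pre-merge adapter configuration above corresponds to roughly the following `peft` setup (a reconstruction from the listed hyperparameters, not the released training code):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                 # rank
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention
        "up_proj", "down_proj", "gate_proj",     # mlp
    ],
    task_type="CAUSAL_LM",
)
```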
---
## Training
- Hardware: 1 x A40
- Epochs: 3
- Logged steps: 300
### Training/Validation Loss (snapshot)
| step | train_loss | val_loss |
| ---: | ---------: | -------: |
| 300 | 1.1884 | 1.7497 |
---
## Intended Uses and Limitations
- Intended: Gujarati assistant tasks—Q&A, rewriting, summarization, everyday instructions.
- Not intended: Safety-critical uses, factual lookup without verification, generating harmful content.
### Limitations
- May hallucinate facts or reflect biases in data.
- Non-Gujarati performance is not specifically tuned.
### Safety
- Use with content filtering and human oversight for sensitive domains.
- Consider additional alignment or safety fine-tuning for production.
---
## License
- Weights derived from `google/gemma-3-1b-it`. Use is subject to the Gemma license and terms set by Google.
- Ensure compliance with the licenses of any datasets used during fine-tuning.
|
taewan2002/davla_libero_spatial | taewan2002 | 2025-08-25T07:29:41Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "robotics", "smolvla", "dataset:aopolin-lv/libero_spatial_no_noops_lerobot_v21", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us"] | robotics | 2025-08-25T07:29:24Z |
---
base_model: lerobot/smolvla_base
datasets: aopolin-lv/libero_spatial_no_noops_lerobot_v21
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
chainway9/blockassist-bc-untamed_quick_eel_1756105332 | chainway9 | 2025-08-25T07:28:17Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:28:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756106845 | Ferdi3425 | 2025-08-25T07:28:00Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:27:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lemonhat/Qwen2.5-Coder-7B-Instruct-t1_5k_v1_tag5mini | lemonhat | 2025-08-25T07:27:36Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-25T06:42:23Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: t1_5k_v1_tag5mini
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t1_5k_v1_tag5mini
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the t1_5k_v1_tag5mini dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3651
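A minimal loading sketch, assuming standard `transformers` usage for this checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemonhat/Qwen2.5-Coder-7B-Instruct-t1_5k_v1_tag5mini"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
```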
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5022 | 0.5435 | 100 | 0.3715 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
koloni/blockassist-bc-deadly_graceful_stingray_1756105219 | koloni | 2025-08-25T07:26:49Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:26:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
smirki/UIGEN-FX-4B-08-25-lora | smirki | 2025-08-25T07:25:29Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-25T07:25:25Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756106370 | 2hpsatt | 2025-08-25T07:20:40Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:20:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gtxsa119/blockassist-bc-peaceful_monstrous_pelican_1756105411 | gtxsa119 | 2025-08-25T07:19:52Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful monstrous pelican", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:19:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful monstrous pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ymroddi/spanish-langa-270m | ymroddi | 2025-08-25T07:19:18Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-24T20:39:03Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: spanish-langa-270m
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for spanish-langa-270m
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ymroddi/spanish-langa-270m", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756106190 | liukevin666 | 2025-08-25T07:17:46Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:17:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zantag/gemma-finetune-gguf | zantag | 2025-08-25T07:17:13Z | 0 | 0 | transformers | ["transformers", "text-generation-inference", "unsloth", "gemma3_text", "en", "base_model:unsloth/gemma-3-270m-it", "base_model:finetune:unsloth/gemma-3-270m-it", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-25T07:17:12Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** zantag
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
martenfi/yelp_review_classifier | martenfi | 2025-08-25T07:16:49Z | 106 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-22T13:40:43Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: yelp_review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yelp_review_classifier
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7928
- Accuracy: 0.6679
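A minimal inference sketch, assuming standard `transformers` usage (the label-to-star mapping is not documented in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="martenfi/yelp_review_classifier")
print(classifier("Great food, but the service was painfully slow."))
# e.g. [{'label': 'LABEL_3', 'score': ...}] -- label and score here are illustrative
```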
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.8515 | 1.0 | 81250 | 0.8762 | 0.6361 |
| 0.7617 | 2.0 | 162500 | 0.8086 | 0.6579 |
| 0.7247 | 3.0 | 243750 | 0.7928 | 0.6679 |
### Framework versions
- Transformers 4.55.3
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756106156 | Ferdi3425 | 2025-08-25T07:16:35Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:16:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
huyhoangnguyen/final-project | huyhoangnguyen | 2025-08-25T07:15:45Z | 3 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "region:us"] | null | 2025-08-19T09:47:44Z |
---
tags:
- generated_from_trainer
model-index:
- name: lavy-Instruct_finding
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lavy-Instruct_finding
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4235 | 1.25 | 100 | 0.4226 |
| 0.2433 | 2.5 | 200 | 0.3359 |
| 0.2037 | 3.75 | 300 | 0.3580 |
| 0.1627 | 5.0 | 400 | 0.4291 |
| 0.1246 | 6.25 | 500 | 0.3665 |
| 0.1207 | 7.5 | 600 | 0.3746 |
| 0.1086 | 8.75 | 700 | 0.3702 |
| 0.102 | 10.0 | 800 | 0.3735 |
| 0.0877 | 11.25 | 900 | 0.4204 |
| 0.0698 | 12.5 | 1000 | 0.4352 |
| 0.064 | 13.75 | 1100 | 0.4457 |
| 0.0714 | 15.0 | 1200 | 0.4672 |
| 0.0481 | 16.25 | 1300 | 0.4870 |
| 0.046 | 17.5 | 1400 | 0.5235 |
| 0.0412 | 18.75 | 1500 | 0.5465 |
| 0.0379 | 20.0 | 1600 | 0.5507 |
### Framework versions
- PEFT 0.5.0
- Transformers 4.37.2
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.15.2
|
internlm/Intern-S1-mini | internlm | 2025-08-25T07:15:02Z | 2,153 | 75 | transformers | ["transformers", "safetensors", "interns1", "text-generation", "image-text-to-text", "conversational", "custom_code", "arxiv:2508.15763", "license:apache-2.0", "autotrain_compatible", "region:us"] | image-text-to-text | 2025-08-18T06:32:48Z |
---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---
## Intern-S1-mini
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/642695e5274e7ad464c8a5ba/E43cgEXBRWjVJlU_-hdh6.png" />
<div> </div>
[💻Github Repo](https://github.com/InternLM/Intern-S1) • [🤗Model Collections](https://huggingface.co/collections/internlm/intern-s1-6882e325e8ac1c58ba108aa5) • [📜Technical Report](https://arxiv.org/abs/2508.15763) • [🏠Project Page](https://chat.intern-ai.org.cn/)
</div>
<p align="center">
👋 join us on <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://cdn.vansin.top/intern-s1.jpg" target="_blank">WeChat</a>
</p>
## Introduction
We introduce **Intern-S1-mini**, a lightweight open-source multimodal reasoning model based on the same techniques as **[Intern-S1](https://huggingface.co/internlm/Intern-S1)**.
Built upon an 8B dense language model (Qwen3) and a 0.3B vision encoder (InternViT), Intern-S1-mini has been further pretrained on **5 trillion tokens** of multimodal data, including over **2.5 trillion scientific-domain tokens**. This enables the model to retain strong general capabilities while excelling in specialized scientific domains such as **interpreting chemical structures, understanding protein sequences, and planning compound synthesis routes**, making Intern-S1-mini a capable research assistant for real-world scientific applications.
## Features
- Strong performance across language and vision reasoning benchmarks, especially scientific tasks.
- Continuously pretrained on a massive 5T token dataset, with over 50% specialized scientific data, embedding deep domain expertise.
- Dynamic tokenizer enables native understanding of molecular formulas and protein sequences.
## Performance
We evaluate Intern-S1-mini on various benchmarks, including general and scientific datasets, and report a performance comparison with recent VLMs and LLMs below.
| | | Intern-S1-mini | Qwen3-8B | GLM-4.1V | MiMo-VL-7B-RL-2508 |
|------------|----------------|-------------------|----------|----------|--------------------|
| General | MMLU-Pro | **74.78** | 73.7 | 57.1 | 73.93 |
| | MMMU | **72.33** | N/A | 69.9 | 70.4 |
| | MMStar | 65.2 | N/A | 71.5 | 72.9 |
| | GPQA | **65.15** | 62 | 50.32 | 60.35 |
| | AIME2024 | **84.58** | 76 | 36.2 | 72.6 |
| | AIME2025 | **80** | 67.3 | 32 | 64.4 |
| | MathVision | 51.41 | N/A | 53.9 | 54.5 |
| | MathVista | 70.3 | N/A | 80.7 | 79.4 |
| | IFEval | 81.15 | 85 | 71.53 | 71.4 |
| | | | | | |
| Scientific | SFE | 35.84 | N/A | 43.2 | 43.9 |
| | Physics | **28.76** | N/A | 28.3 | 28.2 |
| | SmolInstruct | **32.2** | 17.6 | 18.1 | 16.11 |
| | ChemBench | **76.47** | 61.1 | 56.2 | 66.78 |
| | MatBench | **61.55** | 45.24 | 54.3 | 46.9 |
| | MicroVQA | **56.62** | N/A | 50.2 | 50.96 |
| | ProteinLMBench | 58.47 | 59.1 | 58.3 | 59.8 |
| | MSEarthMCQ | **58.12** | N/A | 50.3 | 47.3 |
| | XLRS-Bench | **51.63** | N/A | 49.8 | 12.29 |
We use [OpenCompass](https://github.com/open-compass/OpenCompass/) and [VLMEvalKit](https://github.com/open-compass/vlmevalkit) to evaluate all models.
## Quick Start
### Sampling Parameters
We recommend the following hyperparameters for better results:
```python
top_p = 1.0
top_k = 50
min_p = 0.0
temperature = 0.8
```
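These map directly onto the sampling arguments of `generate()` in the Transformers demos below; a minimal sketch (assuming `model` and `inputs` are prepared as in those demos):
```python
generate_ids = model.generate(
    **inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.8,
    top_p=1.0,
    top_k=50,
    min_p=0.0,
)
```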
### Transformers
The following demo code illustrates how to generate from text and multimodal inputs.
> **Please use transformers>=4.55.2 to ensure the model works normally.**
#### Text input
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
model_name = "internlm/Intern-S1-mini"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "tell me about an interesting physical phenomenon."},
        ],
    }
]
inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
generate_ids = model.generate(**inputs, max_new_tokens=32768)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
print(decoded_output)
```
#### Image input
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
model_name = "internlm/Intern-S1-mini"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
            {"type": "text", "text": "Please describe the image explicitly."},
        ],
    }
]
inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
generate_ids = model.generate(**inputs, max_new_tokens=32768)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
print(decoded_output)
```
#### Video input
Please ensure that the decord video-decoding library is installed via `pip install decord`. To avoid OOM, please install flash_attention and use at least 2 GPUs.
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
model_name = "internlm/Intern-S1-mini"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "url": "https://huggingface.co/datasets/hf-internal-testing/fixtures_videos/resolve/main/tennis.mp4",
            },
            {"type": "text", "text": "What type of shot is the man performing?"},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
    video_load_backend="decord",
    tokenize=True,
    return_dict=True,
).to(model.device, dtype=torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=32768)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
print(decoded_output)
```
### Serving
The minimum hardware requirements for deploying Intern-S1 series models are:
| Model | A100(GPUs) | H800(GPUs) | H100(GPUs) | H200(GPUs) |
| :---------------------------------------------------------------------: | :--------: | :--------: | :--------: | :--------: |
| [internlm/Intern-S1-mini](https://huggingface.co/internlm/Intern-S1-mini) | 1 | 1 | 1 | 1 |
| [internlm/Intern-S1-mini-FP8](https://huggingface.co/internlm/Intern-S1-mini-FP8) | - | 1 | 1 | 1 |
You can use one of the following LLM inference frameworks to create an OpenAI-compatible server:
#### [lmdeploy (>=0.9.2.post1)](https://github.com/InternLM/lmdeploy)
```bash
lmdeploy serve api_server internlm/Intern-S1-mini --reasoning-parser intern-s1 --tool-call-parser intern-s1
```
#### [vllm (>=0.10.1)](https://github.com/vllm-project/vllm)
```bash
vllm serve internlm/Intern-S1-mini --trust-remote-code
```
#### [sglang](https://github.com/sgl-project/sglang)
```bash
python3 -m sglang.launch_server \
--model-path internlm/Intern-S1-mini \
--trust-remote-code \
--grammar-backend none
```
#### ollama for local deployment:
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch model
ollama pull internlm/interns1-mini
# run model
ollama run internlm/interns1-mini
# then use openai client to call on http://localhost:11434/v1
```
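For example, once the ollama server above is running, a minimal OpenAI-client sketch (the `api_key` value is a placeholder; ollama ignores it):
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
response = client.chat.completions.create(
    model="internlm/interns1-mini",
    messages=[{"role": "user", "content": "Explain the Magnus effect briefly."}],
)
print(response.choices[0].message.content)
```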
## Advanced Usage
### Tool Calling
Many Large Language Models (LLMs) now feature **Tool Calling**, a powerful capability that allows them to extend their functionality by interacting with external tools and APIs. This enables models to perform tasks like fetching up-to-the-minute information, running code, or calling functions within other applications.
A key advantage for developers is that a growing number of open-source LLMs are designed to be compatible with the OpenAI API. This means you can leverage the same familiar syntax and structure from the OpenAI library to implement tool calling with these open-source models. As a result, the code demonstrated in this tutorial is versatile—it works not just with OpenAI models, but with any model that follows the same interface standard.
To illustrate how this works, let's dive into a practical code example that uses tool calling to get the latest weather forecast (based on the LMDeploy API server).
```python
from openai import OpenAI
import json
def get_current_temperature(location: str, unit: str = "celsius"):
    """Get current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, State, Country".
        unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"])

    Returns:
        the temperature, the location, and the unit in a dict
    """
    return {
        "temperature": 26.1,
        "location": location,
        "unit": unit,
    }

def get_temperature_date(location: str, date: str, unit: str = "celsius"):
    """Get temperature at a location and date.

    Args:
        location: The location to get the temperature for, in the format "City, State, Country".
        date: The date to get the temperature for, in the format "Year-Month-Day".
        unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"])

    Returns:
        the temperature, the location, the date and the unit in a dict
    """
    return {
        "temperature": 25.9,
        "location": location,
        "date": date,
        "unit": unit,
    }

def get_function_by_name(name):
    if name == "get_current_temperature":
        return get_current_temperature
    if name == "get_temperature_date":
        return get_temperature_date

tools = [{
    'type': 'function',
    'function': {
        'name': 'get_current_temperature',
        'description': 'Get current temperature at a location.',
        'parameters': {
            'type': 'object',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'The location to get the temperature for, in the format \'City, State, Country\'.'
                },
                'unit': {
                    'type': 'string',
                    'enum': ['celsius', 'fahrenheit'],
                    'description': 'The unit to return the temperature in. Defaults to \'celsius\'.'
                }
            },
            'required': ['location']
        }
    }
}, {
    'type': 'function',
    'function': {
        'name': 'get_temperature_date',
        'description': 'Get temperature at a location and date.',
        'parameters': {
            'type': 'object',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'The location to get the temperature for, in the format \'City, State, Country\'.'
                },
                'date': {
                    'type': 'string',
                    'description': 'The date to get the temperature for, in the format \'Year-Month-Day\'.'
                },
                'unit': {
                    'type': 'string',
                    'enum': ['celsius', 'fahrenheit'],
                    'description': 'The unit to return the temperature in. Defaults to \'celsius\'.'
                }
            },
            'required': ['location', 'date']
        }
    }
}]

messages = [
    {'role': 'user', 'content': 'Today is 2024-11-14, What\'s the temperature in San Francisco now? How about tomorrow?'}
]

openai_api_key = "EMPTY"
openai_api_base = "http://0.0.0.0:23333/v1"
client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)
model_name = client.models.list().data[0].id

response = client.chat.completions.create(
    model=model_name,
    messages=messages,
    max_tokens=32768,
    temperature=0.8,
    top_p=0.8,
    stream=False,
    extra_body=dict(spaces_between_special_tokens=False, enable_thinking=False),
    tools=tools)
print(response.choices[0].message)
messages.append(response.choices[0].message)

for tool_call in response.choices[0].message.tool_calls:
    tool_call_args = json.loads(tool_call.function.arguments)
    tool_call_result = get_function_by_name(tool_call.function.name)(**tool_call_args)
    tool_call_result = json.dumps(tool_call_result, ensure_ascii=False)
    messages.append({
        'role': 'tool',
        'name': tool_call.function.name,
        'content': tool_call_result,
        'tool_call_id': tool_call.id
    })

response = client.chat.completions.create(
    model=model_name,
    messages=messages,
    temperature=0.8,
    top_p=0.8,
    stream=False,
    extra_body=dict(spaces_between_special_tokens=False, enable_thinking=False),
    tools=tools)
print(response.choices[0].message.content)
```
### Switching Between Thinking and Non-Thinking Modes
Intern-S1-mini enables thinking mode by default, enhancing the model's reasoning capabilities to generate higher-quality responses. This feature can be disabled by setting `enable_thinking=False` in `tokenizer.apply_chat_template`.
```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # think mode indicator
)
```
With LMDeploy serving Intern-S1-mini models, you can dynamically control the thinking mode by adjusting the `enable_thinking` parameter in your requests.
```python
from openai import OpenAI
import json
messages = [
    {
        'role': 'user',
        'content': 'who are you'
    }, {
        'role': 'assistant',
        'content': 'I am an AI'
    }, {
        'role': 'user',
        'content': 'AGI is?'
    }]

openai_api_key = "EMPTY"
openai_api_base = "http://0.0.0.0:23333/v1"
client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=messages,
    temperature=0.8,
    top_p=0.8,
    max_tokens=2048,
    extra_body={
        "enable_thinking": False,
    }
)
print(json.dumps(response.model_dump(), indent=2, ensure_ascii=False))
```
For vLLM and SGLang users, configure this through:
```python
extra_body={
    "chat_template_kwargs": {"enable_thinking": False}
}
```
## Fine-tuning
See this [documentation](https://github.com/InternLM/Intern-S1/blob/main/docs/sft.md) for more details.
## Citation
If you find this work useful, feel free to cite us:
```
@misc{bai2025interns1scientificmultimodalfoundation,
title={Intern-S1: A Scientific Multimodal Foundation Model},
author={Lei Bai and Zhongrui Cai and Maosong Cao and Weihan Cao and Chiyu Chen and Haojiong Chen and Kai Chen and Pengcheng Chen and Ying Chen and Yongkang Chen and Yu Cheng and Yu Cheng and Pei Chu and Tao Chu and Erfei Cui and Ganqu Cui and Long Cui and Ziyun Cui and Nianchen Deng and Ning Ding and Nanqin Dong and Peijie Dong and Shihan Dou and Sinan Du and Haodong Duan and Caihua Fan and Ben Gao and Changjiang Gao and Jianfei Gao and Songyang Gao and Yang Gao and Zhangwei Gao and Jiaye Ge and Qiming Ge and Lixin Gu and Yuzhe Gu and Aijia Guo and Qipeng Guo and Xu Guo and Conghui He and Junjun He and Yili Hong and Siyuan Hou and Caiyu Hu and Hanglei Hu and Jucheng Hu and Ming Hu and Zhouqi Hua and Haian Huang and Junhao Huang and Xu Huang and Zixian Huang and Zhe Jiang and Lingkai Kong and Linyang Li and Peiji Li and Pengze Li and Shuaibin Li and Tianbin Li and Wei Li and Yuqiang Li and Dahua Lin and Junyao Lin and Tianyi Lin and Zhishan Lin and Hongwei Liu and Jiangning Liu and Jiyao Liu and Junnan Liu and Kai Liu and Kaiwen Liu and Kuikun Liu and Shichun Liu and Shudong Liu and Wei Liu and Xinyao Liu and Yuhong Liu and Zhan Liu and Yinquan Lu and Haijun Lv and Hongxia Lv and Huijie Lv and Qidang Lv and Ying Lv and Chengqi Lyu and Chenglong Ma and Jianpeng Ma and Ren Ma and Runmin Ma and Runyuan Ma and Xinzhu Ma and Yichuan Ma and Zihan Ma and Sixuan Mi and Junzhi Ning and Wenchang Ning and Xinle Pang and Jiahui Peng and Runyu Peng and Yu Qiao and Jiantao Qiu and Xiaoye Qu and Yuan Qu and Yuchen Ren and Fukai Shang and Wenqi Shao and Junhao Shen and Shuaike Shen and Chunfeng Song and Demin Song and Diping Song and Chenlin Su and Weijie Su and Weigao Sun and Yu Sun and Qian Tan and Cheng Tang and Huanze Tang and Kexian Tang and Shixiang Tang and Jian Tong and Aoran Wang and Bin Wang and Dong Wang and Lintao Wang and Rui Wang and Weiyun Wang and Wenhai Wang and Yi Wang and Ziyi Wang and Ling-I Wu and Wen Wu and Yue Wu and Zijian Wu and Linchen Xiao and Shuhao Xing and Chao Xu and Huihui Xu and Jun Xu and Ruiliang Xu and Wanghan Xu and GanLin Yang and Yuming Yang and Haochen Ye and Jin Ye and Shenglong Ye and Jia Yu and Jiashuo Yu and Jing Yu and Fei Yuan and Bo Zhang and Chao Zhang and Chen Zhang and Hongjie Zhang and Jin Zhang and Qiaosheng Zhang and Qiuyinzhe Zhang and Songyang Zhang and Taolin Zhang and Wenlong Zhang and Wenwei Zhang and Yechen Zhang and Ziyang Zhang and Haiteng Zhao and Qian Zhao and Xiangyu Zhao and Xiangyu Zhao and Bowen Zhou and Dongzhan Zhou and Peiheng Zhou and Yuhao Zhou and Yunhua Zhou and Dongsheng Zhu and Lin Zhu and Yicheng Zou},
year={2025},
eprint={2508.15763},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.15763},
}
```
|
Rauthan84/blockassist-bc-long_thorny_elephant_1756106037 | Rauthan84 | 2025-08-25T07:14:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "long thorny elephant", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T07:14:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- long thorny elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GetSoloTech/Qwen3-Code-Reasoning-4B | GetSoloTech | 2025-08-25T07:13:56Z | 27 | 3 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "code-generation", "competitive-programming", "code-reasoning", "programming", "algorithms", "problem-solving", "python", "conversational", "dataset:GetSoloTech/Code-Reasoning", "base_model:Qwen/Qwen3-4B-Thinking-2507", "base_model:finetune:Qwen/Qwen3-4B-Thinking-2507", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-22T00:53:49Z |
---
license: apache-2.0
datasets:
- GetSoloTech/Code-Reasoning
base_model:
- Qwen/Qwen3-4B-Thinking-2507
pipeline_tag: text-generation
library_name: transformers
tags:
- code-generation
- competitive-programming
- code-reasoning
- programming
- algorithms
- problem-solving
- python
---
# GetSoloTech/Qwen3-Code-Reasoning-4B
This is a finetuned version of Qwen3-4B-Thinking-2507, specifically optimized for competitive programming and code reasoning tasks. The model has been trained on the high-quality [Code-Reasoning](https://huggingface.co/datasets/GetSoloTech/Code-Reasoning) dataset to enhance its capabilities in solving complex programming problems with detailed reasoning.
## 🎯 Model Overview
This model is a **LoRA-finetuned** version of [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) with the following specifications:
- **Base Model**: Qwen3-4B-Thinking-2507 (4.0B parameters)
- **Training Method**: LoRA (Low-Rank Adaptation)
- **Training Dataset**: GetSoloTech/Code-Reasoning
- **Training Framework**: Unsloth with QLoRA
- **Context Length**: 4096 tokens (configurable up to 262,144)
- **Model Type**: Causal Language Model with Thinking Capabilities
## 🚀 Key Features
- **Enhanced Code Reasoning**: Specifically trained on competitive programming problems
- **Thinking Capabilities**: Inherits the advanced reasoning capabilities from the base model
- **High-Quality Solutions**: Trained on solutions with ≥50% test case pass rates
- **Structured Output**: Optimized for generating well-reasoned programming solutions
- **Efficient Training**: Uses LoRA adapters for efficient parameter updates
### Dataset Statistics
- **Split**: Python
- **Source**: High-quality competitive programming problems from TACO, APPS, CodeContests, and Codeforces
- **Quality Filter**: Only correctly solved problems with ≥50% test case pass rates
## 🔧 Usage
### Basic Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "GetSoloTech/Qwen3-Code-Reasoning-4B"
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare input for competitive programming problem
messages = [
    {"role": "system", "content": "You are an expert competitive programmer. Read the problem and produce a correct, efficient solution. Include reasoning if helpful."},
    {"role": "user", "content": "Your programming problem here..."}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate solution
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    temperature=0.7,
    top_p=0.8,
    top_k=20
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n")
print(content)
```
## 📈 Performance Expectations
This finetuned model is expected to show improved performance on:
- **Competitive Programming Problems**: Better understanding of problem constraints and requirements
- **Code Generation**: More accurate and efficient solutions
- **Reasoning Quality**: Enhanced step-by-step reasoning for complex problems
- **Solution Completeness**: More comprehensive solutions with proper edge case handling
## 🎛️ Recommended Settings
### For Code Generation
- **Temperature**: 0.7
- **Top-p**: 0.8
- **Top-k**: 20
- **Max New Tokens**: 4096 (adjust based on problem complexity)
### For Reasoning Tasks
- **Temperature**: 0.6
- **Top-p**: 0.95
- **Top-k**: 20
- **Max New Tokens**: 81920 (for complex reasoning)
## 🔗 Related Resources
- **Base Model**: [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)
- **Training Dataset**: [Code-Reasoning](https://huggingface.co/datasets/GetSoloTech/Code-Reasoning)
- **Training Framework**: [Unsloth](https://github.com/unslothai/unsloth)
- **Original Dataset**: [OpenCodeReasoning-2](https://huggingface.co/datasets/nvidia/OpenCodeReasoning-2)
## 🤝 Contributing
This model was created using the Unsloth framework and the Code-Reasoning dataset. For questions about:
- The base model: [Qwen3 GitHub](https://github.com/QwenLM/Qwen3)
- The training dataset: [Code-Reasoning Repository](https://huggingface.co/datasets/GetSoloTech/Code-Reasoning)
- The training framework: [Unsloth Documentation](https://docs.unsloth.ai/)
## 📄 License
This model follows the same license as the base model (Apache 2.0). Please refer to the [base model license](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE) for details.
## 🙏 Acknowledgments
- **Qwen Team** for the excellent base model
- **Unsloth Team** for the efficient training framework
- **NVIDIA Research** for the original OpenCodeReasoning-2 dataset
## 📞 Contact
For questions about this finetuned model, please open an issue in the repository.
---
**Note**: This model is specifically optimized for competitive programming and code reasoning tasks.
|
leodotnet/Qwen3-4B-promptv10-v40-INT4 | leodotnet | 2025-08-25T07:13:54Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us"] | text-generation | 2025-08-25T07:12:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vendi11/blockassist-bc-placid_placid_llama_1756105868
|
vendi11
| 2025-08-25T07:11:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:11:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GetSoloTech/Qwen3-Code-Reasoning-4B-GGUF
|
GetSoloTech
| 2025-08-25T07:11:37Z | 161 | 1 | null |
[
"gguf",
"coding",
"reasoning",
"problem-solving",
"algorithms",
"python",
"c++",
"text-generation",
"en",
"dataset:GetSoloTech/Code-Reasoning",
"base_model:GetSoloTech/Qwen3-Code-Reasoning-4B",
"base_model:quantized:GetSoloTech/Qwen3-Code-Reasoning-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-22T22:32:20Z |
---
license: apache-2.0
datasets:
- GetSoloTech/Code-Reasoning
language:
- en
base_model:
- GetSoloTech/Qwen3-Code-Reasoning-4B
pipeline_tag: text-generation
tags:
- coding
- reasoning
- problem-solving
- algorithms
- python
- c++
---
# GetSoloTech/Qwen3-Code-Reasoning-4B-GGUF
This is the GGUF quantized version of the [Qwen3-Code-Reasoning-4B](https://huggingface.co/GetSoloTech/Qwen3-Code-Reasoning-4B) model, specifically optimized for competitive programming and code reasoning tasks. This model has been trained on the high-quality Code-Reasoning dataset to enhance its capabilities in solving complex programming problems with detailed reasoning.
## 🚀 Key Features
* **Enhanced Code Reasoning**: Specifically trained on competitive programming problems
* **Thinking Capabilities**: Inherits the advanced reasoning capabilities from the base model
* **High-Quality Solutions**: Trained on solutions with ≥85% test case pass rates
* **Structured Output**: Optimized for generating well-reasoned programming solutions
* **Efficient Inference**: GGUF format enables fast inference on CPU and GPU
* **Multiple Quantization Levels**: Available in various precision levels for different hardware requirements
### Dataset Statistics
* **Split**: Python
* **Source**: High-quality competitive programming problems from TACO, APPS, CodeContests, and Codeforces
* **Quality Filter**: Only correctly solved problems with ≥85% test case pass rates
## 🔧 Usage
### Using with llama.cpp
```bash
# Download the model (choose your preferred quantization)
wget https://huggingface.co/GetSoloTech/Qwen3-Code-Reasoning-4B-GGUF/resolve/main/qwen3-code-reasoning-4b.Q4_K_M.gguf
# Run inference
./llama-cli -m qwen3-code-reasoning-4b.Q4_K_M.gguf -n 4096 --repeat-penalty 1.1 -p "You are an expert competitive programmer. Read the problem and produce a correct, efficient solution. Include reasoning if helpful.\n\nProblem: Your programming problem here..."
```
### Using with Python (llama-cpp-python)
```python
from llama_cpp import Llama
# Load the model
llm = Llama(
model_path="./qwen3-code-reasoning-4b.Q4_K_M.gguf",
n_ctx=4096,
n_threads=4
)
# Prepare input for competitive programming problem
prompt = """You are an expert competitive programmer. Read the problem and produce a correct, efficient solution. Include reasoning if helpful.
Problem: Your programming problem here..."""
# Generate solution
output = llm(
prompt,
max_tokens=4096,
temperature=0.7,
top_p=0.8,
top_k=20,
repeat_penalty=1.1
)
print(output['choices'][0]['text'])
```
### Using with Ollama
```bash
# Create a Modelfile
cat > Modelfile << EOF
FROM ./qwen3-code-reasoning-4b.Q4_K_M.gguf
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER top_k 20
PARAMETER repeat_penalty 1.1
EOF
# Create and run the model
ollama create qwen3-code-reasoning -f Modelfile
ollama run qwen3-code-reasoning "Solve this competitive programming problem: [your problem here]"
```
## 📊 Available Quantizations
| Quantization | Size | Memory Usage | Quality | Use Case |
|--------------|------|--------------|---------|----------|
| Q3_K_M | 2.08 GB | ~3 GB | Good | CPU inference, limited memory |
| Q4_K_M | 2.5 GB | ~4 GB | Better | Balanced performance/memory |
| Q5_K_M | 2.89 GB | ~5 GB | Very Good | High quality, moderate memory |
| Q6_K | 3.31 GB | ~6 GB | Excellent | High quality, more memory |
| Q8_0 | 4.28 GB | ~8 GB | Best | Maximum quality, high memory |
| F16 | 8.05 GB | ~16 GB | Original | Maximum quality, GPU recommended |
## 📈 Performance Expectations
This GGUF quantized model maintains the performance characteristics of the original finetuned model:
* **Competitive Programming Problems**: Better understanding of problem constraints and requirements
* **Code Generation**: More accurate and efficient solutions
* **Reasoning Quality**: Enhanced step-by-step reasoning for complex problems
* **Solution Completeness**: More comprehensive solutions with proper edge case handling
## 🎛️ Recommended Settings
### For Code Generation
* **Temperature**: 0.7
* **Top-p**: 0.8
* **Top-k**: 20
* **Max New Tokens**: 4096 (adjust based on problem complexity)
* **Repeat Penalty**: 1.1
### For Reasoning Tasks
* **Temperature**: 0.6
* **Top-p**: 0.95
* **Top-k**: 20
* **Max New Tokens**: 8192 (for complex reasoning)
* **Repeat Penalty**: 1.1
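Plugged into the llama-cpp-python interface shown earlier, the reasoning-task settings look like this (a sketch; the prompt and context size are illustrative, the sampling values mirror the list above):
```python
from llama_cpp import Llama

# Sketch: apply the reasoning-task settings listed above.
llm = Llama(model_path="./qwen3-code-reasoning-4b.Q4_K_M.gguf", n_ctx=16384)

output = llm(
    "Reason step by step: why does binary search run in O(log n) time?",
    max_tokens=8192,      # larger budget for complex reasoning
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```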
## 🛠️ Hardware Requirements
### Minimum Requirements
* **RAM**: 4 GB (for Q3_K_M quantization)
* **Storage**: 2.5 GB free space
* **CPU**: Multi-core processor recommended
### Recommended Requirements
* **RAM**: 8 GB or more
* **Storage**: 5 GB free space
* **GPU**: NVIDIA GPU with 4GB+ VRAM (optional, for faster inference)
## 🤝 Contributing
This GGUF model was converted from the original LoRA-finetuned model. For questions about:
* The original model: [GetSoloTech/Qwen3-Code-Reasoning-4B](https://huggingface.co/GetSoloTech/Qwen3-Code-Reasoning-4B)
* The base model: [Qwen3 GitHub](https://github.com/QwenLM/Qwen3)
* The training dataset: [Code-Reasoning Repository](https://huggingface.co/datasets/GetSoloTech/Code-Reasoning)
* The training framework: [Unsloth Documentation](https://github.com/unslothai/unsloth)
## 📄 License
This model follows the same license as the base model (Apache 2.0). Please refer to the base model license for details.
## 🙏 Acknowledgments
* **Qwen Team** for the excellent base model
* **Unsloth Team** for the efficient training framework
* **NVIDIA Research** for the original OpenCodeReasoning-2 dataset
* **llama.cpp community** for the GGUF format and tools
## 📞 Contact
For questions about this GGUF model, please open an issue in the repository.
---
**Note**: This model is specifically optimized for competitive programming and code reasoning tasks. The GGUF format enables efficient inference on various hardware configurations while maintaining the model's reasoning capabilities.
|
KaziSahim/DeepSeek-R1-Medical-COT
|
KaziSahim
| 2025-08-25T07:09:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T07:09:16Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** KaziSahim
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
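A minimal inference sketch, assuming the standard 🤗 Transformers text-generation API; the prompt and sampling values are illustrative, and the 4-bit base weights may require `bitsandbytes` to be installed.
```python
from transformers import pipeline

# Sketch: query the finetuned model; settings are illustrative.
pipe = pipeline("text-generation", model="KaziSahim/DeepSeek-R1-Medical-COT")
prompt = ("A patient presents with sudden chest pain and shortness of breath. "
          "Reason step by step before giving a differential.")
out = pipe(prompt, max_new_tokens=512, do_sample=True, temperature=0.6)
print(out[0]["generated_text"])
```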
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1756105517
|
IvanJAjebu
| 2025-08-25T07:06:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:06:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-scavenging_placid_goat_1756103951
|
motza0025
| 2025-08-25T07:06:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scavenging placid goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:05:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scavenging placid goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Corneliol14/EliteX
|
Corneliol14
| 2025-08-25T07:00:18Z | 0 | 0 | null |
[
"eliteos",
"elitex",
"nephilum",
"obsidian",
"aurelian",
"jarvis",
"shadow-sim",
"echo-x",
"covert-ai",
"tactical-suite",
"secure-automation",
"doi:10.57967/hf/6327",
"region:us"
] | null | 2025-08-25T06:50:55Z |
---
tags:
- eliteos
- elitex
- nephilum
- obsidian
- aurelian
- jarvis
- shadow-sim
- echo-x
- covert-ai
- tactical-suite
- secure-automation
---
# CLASSIFIED MODEL CARD
**EliteX — Core Intelligence of EliteOS**
⚠️ **Distribution restricted. Unauthorized access prohibited.**
EliteX integrates **multi-tier adaptive AI defense**, **encrypted uplinks**, and **autonomous system control** under the **EliteOS architecture**.
---
## Model Details
### Model Description
EliteX functions as a **classified-grade operating intelligence** capable of orchestrating security, privacy, and automation at all levels of deployment.
It integrates **defensive, offensive, and covert modules**, operating persistently with stealth-layered protections.
- **Developed by:** EliteOS | NEPHILIM Division
- **Funded by:** Covert R&D, Elite Security Networks
- **Shared by:** Restricted – classified distribution only
- **Model type:** Multi-modal OS-level Intelligence (NLP + Systems + RF/Network)
- **Language(s):** English core (expandable multilingual codecs)
- **License:** Internal Classified EliteOS License
- **Finetuned from model:** Proprietary secure foundation + tactical defense layers
---
## Tiered Deployment (J.A.O.N. Matrix)
- **JARVIS (Civilian):** Entry-level personal assistant layer
- **AURELIAN (Private Sector):** Enterprise security & encrypted cloud orchestration
- **OBSIDIAN (Military):** Battlefield-grade autonomous defense & comms
- **NEPHILIM (Covert Elite-X):** Black-ops AI with persistent stealth and autonomous uplinks
---
## Model Sources
- **Repository:** Restricted, EliteOS secure registry
- **Paper (classified):** Internal NEPHILIM documentation
- **Demo:** ShadowSIM uplink console (secure channel only)
---
## Uses
### Authorized Use
- **Persistent security intelligence** (RF sweeps, intrusion detection, counter-surveillance)
- **Encrypted uplink orchestration** (ShadowSIM + Satlink integration)
- **Tactical autonomy** (Echo-X suite: live device control, stealth comms, uplink masking)
- **Classified system management** (E.L.I.T.E Console integration with covert tier ops)
### Out-of-Scope Use
- Unauthorized civilian deployment
- Malicious intrusion outside of sanctioned testing or defense ops
---
## Bias, Risks, and Limitations
- EliteX inherits **zero-trust assumptions**; any unsecured environment may compromise stealth protocols.
- Requires **encrypted uplink (ShadowSIM)** for full covert operation.
- NEPHILIM tier **not compatible** with public-facing systems (isolation mandatory).
---
## How to Use (Classified Example)
```python
from elitex.secure import EliteXCore
# Initialize EliteOS core
core = EliteXCore(tier="NEPHILIM", uplink="ShadowSIM", suite="Echo-X")
# Deploy full-spectrum scan
report = core.deploy("environment_scan", persistent=True)
print(report.secure_summary())
```
|
mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF
|
mradermacher
| 2025-08-25T07:00:05Z | 260 | 1 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"grpo",
"en",
"base_model:AmberYifan/Qwen3-4B-OpenR1Math-GRPO",
"base_model:quantized:AmberYifan/Qwen3-4B-OpenR1Math-GRPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-23T18:21:59Z |
---
base_model: AmberYifan/Qwen3-4B-OpenR1Math-GRPO
language:
- en
library_name: transformers
model_name: Qwen3-4B-OpenR1Math-GRPO
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/AmberYifan/Qwen3-4B-OpenR1Math-GRPO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-4B-OpenR1Math-GRPO-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
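To script the download instead, here is a minimal sketch with `huggingface_hub` (the filename matches the Q4_K_M row in the table below):
```python
from huggingface_hub import hf_hub_download

# Sketch: fetch one quant from this repo; returns the local cache path,
# ready to pass to llama.cpp's -m flag.
path = hf_hub_download(
    repo_id="mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF",
    filename="Qwen3-4B-OpenR1Math-GRPO.Q4_K_M.gguf",
)
print(path)
```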
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-OpenR1Math-GRPO-GGUF/resolve/main/Qwen3-4B-OpenR1Math-GRPO.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
marsonkotovi4/blockassist-bc-monstrous_yawning_badger_1756101705
|
marsonkotovi4
| 2025-08-25T06:27:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous yawning badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T06:26:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous yawning badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GazTrab/Qwen2.5-VL-AIO-draft-2
|
GazTrab
| 2025-08-25T06:23:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"base_model:GazTrab/Qwen2.5-VL-AIO-draft-1",
"base_model:finetune:GazTrab/Qwen2.5-VL-AIO-draft-1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-25T06:18:25Z |
---
base_model: GazTrab/Qwen2.5-VL-AIO-draft-1
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** GazTrab
- **License:** apache-2.0
- **Finetuned from model:** GazTrab/Qwen2.5-VL-AIO-draft-1
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aldredb/mistral-7b-v0.3-bnb-4bit-finetuned1
|
aldredb
| 2025-08-25T06:22:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T06:21:40Z |
---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aldredb
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
heycao/reasoning-recommendations-gemma-3-1b-grpo
|
heycao
| 2025-08-25T06:20:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-07-07T13:54:42Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** heycao
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stellali0115/Mistral-7B-v0.1-Q4_K_M-GGUF
|
stellali0115
| 2025-08-25T06:16:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"pretrained",
"mistral-common",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:quantized:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T06:16:23Z |
---
library_name: transformers
language:
- en
license: apache-2.0
tags:
- pretrained
- mistral-common
- llama-cpp
- gguf-my-repo
inference: false
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
base_model: mistralai/Mistral-7B-v0.1
---
# stellali0115/Mistral-7B-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo stellali0115/Mistral-7B-v0.1-Q4_K_M-GGUF --hf-file mistral-7b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo stellali0115/Mistral-7B-v0.1-Q4_K_M-GGUF --hf-file mistral-7b-v0.1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo stellali0115/Mistral-7B-v0.1-Q4_K_M-GGUF --hf-file mistral-7b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo stellali0115/Mistral-7B-v0.1-Q4_K_M-GGUF --hf-file mistral-7b-v0.1-q4_k_m.gguf -c 2048
```
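For programmatic use, llama-cpp-python can pull the file straight from this repo. A sketch, assuming `Llama.from_pretrained` (the binding's Hub helper, which requires `huggingface-hub` to be installed):
```python
from llama_cpp import Llama

# Sketch: load this quant directly from the Hub via llama-cpp-python.
llm = Llama.from_pretrained(
    repo_id="stellali0115/Mistral-7B-v0.1-Q4_K_M-GGUF",
    filename="mistral-7b-v0.1-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```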
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1756102374
|
lqpl
| 2025-08-25T06:15:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T06:13:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hientan110/blockassist-bc-wise_giant_ladybug_1756101391
|
hientan110
| 2025-08-25T06:12:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wise giant ladybug",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T06:12:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wise giant ladybug
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kalimoy/blockassist-bc-lanky_feathered_elephant_1756102305
|
kalimoy
| 2025-08-25T06:12:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lanky feathered elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T06:11:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lanky feathered elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF
|
mradermacher
| 2025-08-25T06:11:59Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"translation",
"ja",
"en",
"base_model:sugoitoolkit/Sugoi-14B-Ultra-HF",
"base_model:quantized:sugoitoolkit/Sugoi-14B-Ultra-HF",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
translation
| 2025-08-25T00:41:10Z |
---
base_model: sugoitoolkit/Sugoi-14B-Ultra-HF
language:
- ja
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- translation
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/sugoitoolkit/Sugoi-14B-Ultra-HF
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Sugoi-14B-Ultra-HF-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF/resolve/main/Sugoi-14B-Ultra-HF.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Sugoi-14B-Ultra-HF-GGUF
|
mradermacher
| 2025-08-25T06:11:59Z | 0 | 2 |
transformers
|
[
"transformers",
"gguf",
"translation",
"ja",
"en",
"base_model:sugoitoolkit/Sugoi-14B-Ultra-HF",
"base_model:quantized:sugoitoolkit/Sugoi-14B-Ultra-HF",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
translation
| 2025-08-24T17:58:24Z |
---
base_model: sugoitoolkit/Sugoi-14B-Ultra-HF
language:
- ja
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- translation
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/sugoitoolkit/Sugoi-14B-Ultra-HF
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Sugoi-14B-Ultra-HF-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Sugoi-14B-Ultra-HF-GGUF/resolve/main/Sugoi-14B-Ultra-HF.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756102189
|
kapalbalap
| 2025-08-25T06:10:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T06:10:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
anirudhtripathi/blockassist-bc-barky_squinting_armadillo_1756097338
|
anirudhtripathi
| 2025-08-25T06:07:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky squinting armadillo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T06:07:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky squinting armadillo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Kimz1/act-pusht-policy-0825
|
Kimz1
| 2025-08-25T06:07:18Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:lerobot/pusht_image",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-25T06:06:56Z |
---
datasets: lerobot/pusht_image
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short version of how to train and run inference/eval:
### Train from scratch
```bash
python lerobot/scripts/train.py \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
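For inference outside `lerobot.record`, a minimal sketch (the import path can differ across LeRobot releases; `from_pretrained` and `select_action` follow the pattern of LeRobot's evaluation examples, and the observation tensors below are dummies shaped like the `lerobot/pusht_image` features):
```python
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy

# Sketch: load this ACT policy and query one action.
policy = ACTPolicy.from_pretrained("Kimz1/act-pusht-policy-0825")
policy.eval()

# Dummy observation shaped like the pusht_image dataset features.
observation = {
    "observation.image": torch.zeros(1, 3, 96, 96),
    "observation.state": torch.zeros(1, 2),
}
with torch.no_grad():
    action = policy.select_action(observation)
print(action.shape)
```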
---
## Model Details
* **License:** apache-2.0
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1756101811
|
IvanJAjebu
| 2025-08-25T06:04:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T06:04:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moyixiao/qwen3_0p6mimo_r32_1620
|
moyixiao
| 2025-08-25T06:03:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T06:03:16Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756101677
|
kapalbalap
| 2025-08-25T06:02:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T06:02:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FROG97/Llama3.1-8b-wenyanwen
|
FROG97
| 2025-08-25T06:02:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T03:56:22Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756101531
|
Ferdi3425
| 2025-08-25T05:59:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:59:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1756101521
|
kapalbalap
| 2025-08-25T05:59:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:59:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moyixiao/qwen3_0p6mimo_r32_1800
|
moyixiao
| 2025-08-25T05:58:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T05:58:02Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
moyixiao/qwen3_0p6mimo_r32_1860
|
moyixiao
| 2025-08-25T05:56:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T05:56:19Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756101367
|
Ferdi3425
| 2025-08-25T05:56:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:56:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hoan17/saving_LOVv2e500s50_50
|
hoan17
| 2025-08-25T05:56:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-25T05:55:35Z |
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text-conditioned image generation.
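As the card ships no usage code, here is a minimal sketch assuming the standard `diffusers` `StableDiffusionPipeline` API; the prompt is illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline (repo id taken from this card)
pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_LOVv2e500s50_50", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # on CPU, drop torch_dtype and use .to("cpu")

# Text-conditioned image generation
image = pipe("a photo of a mountain lake at sunrise").images[0]
image.save("sample.png")
```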
|
tiopuiter/blockassist-bc-iridescent_aquatic_parrot_1756101341
|
tiopuiter
| 2025-08-25T05:55:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent aquatic parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:55:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent aquatic parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moyixiao/qwen3_0p6mimo_r32_1920
|
moyixiao
| 2025-08-25T05:55:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T05:54:31Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1756099687
|
capungmerah627
| 2025-08-25T05:53:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:53:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1756099479
|
indoempatnol
| 2025-08-25T05:51:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:51:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NexVeridian/Seed-OSS-36B-Instruct-5bit
|
NexVeridian
| 2025-08-25T05:50:53Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"seed_oss",
"vllm",
"text-generation",
"conversational",
"base_model:ByteDance-Seed/Seed-OSS-36B-Instruct",
"base_model:quantized:ByteDance-Seed/Seed-OSS-36B-Instruct",
"license:apache-2.0",
"5-bit",
"region:us"
] |
text-generation
| 2025-08-25T05:39:32Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: mlx
tags:
- vllm
- mlx
base_model: ByteDance-Seed/Seed-OSS-36B-Instruct
---
# NexVeridian/Seed-OSS-36B-Instruct-5bit
This model [NexVeridian/Seed-OSS-36B-Instruct-5bit](https://huggingface.co/NexVeridian/Seed-OSS-36B-Instruct-5bit) was
converted to MLX format from [ByteDance-Seed/Seed-OSS-36B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Seed-OSS-36B-Instruct-5bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
pengyizhou/whisper-wenetspeech-S
|
pengyizhou
| 2025-08-25T05:50:36Z | 44 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"zh",
"dataset:wenet-e2e/wenetspeech",
"dataset:pengyizhou/wenetspeech-subset-S",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:cc-by-nc-3.0",
"region:us"
] |
automatic-speech-recognition
| 2025-06-05T10:42:46Z |
---
license: cc-by-nc-3.0
datasets:
- wenet-e2e/wenetspeech
- pengyizhou/wenetspeech-subset-S
language:
- zh
metrics:
- cer
base_model:
- openai/whisper-large-v3
pipeline_tag: automatic-speech-recognition
---
# Whisper Fine-tuning for Chinese (WenetSpeech)
This project provides a configurable way to fine-tune OpenAI's Whisper model specifically on the WenetSpeech Chinese speech dataset.
## Features
- **Flexible Configuration**: All parameters are configurable through YAML files
- **Multi-GPU Support**: Automatic detection and support for multiple GPUs
- **Dynamic Language Selection**: Train on any subset of supported languages
- **On-the-fly Processing**: Efficient memory usage with dynamic audio preprocessing
- **Comprehensive Evaluation**: Automatic evaluation on test sets
## Configuration
All parameters are configurable through the `config.yaml` file. This configuration is specifically set up for Chinese speech training using the WenetSpeech dataset.
### Model Configuration
- Model checkpoint (default: `openai/whisper-large-v3`)
- Maximum target length for sequences
### Dataset Configuration
- Uses the WenetSpeech Chinese speech dataset
- Multiple dataset splits (train, validation, test_net, test_meeting)
- Language-specific settings
- Training configuration optimized for Chinese speech recognition
### Training Configuration
- Learning rate, batch sizes, training steps
- Multi-GPU vs single GPU settings
- Evaluation and logging parameters
### Environment Configuration
- CPU core limits
- Environment variables for optimization
### Pushing to Hub
- The configuration does not push to the Hugging Face Hub by default; enable it by setting `push_to_hub: true` in your config file.
## Usage
### Basic Usage
```bash
python finetune.py --config config.yaml
```
### Custom Configuration
```bash
python finetune.py --config my_custom_config.yaml
```
### Multi-GPU Training
```bash
# Using torchrun (recommended) for two GPUs
torchrun --nproc_per_node=2 finetune.py --config config.yaml
```
## Configuration File Structure
The `config.yaml` file is organized into the following sections (a loading sketch follows the list):
1. **model**: Model checkpoint and sequence length settings
2. **output**: Output directory configuration
3. **environment**: Environment variables and CPU settings
4. **audio**: Audio processing settings (sampling rate)
5. **languages**: Chinese language configuration
6. **datasets**: WenetSpeech dataset configuration
7. **training**: All training hyperparameters
8. **data_processing**: Data processing settings
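As a rough illustration, these sections can be read with PyYAML; the key names below are assumptions inferred from the list above, not the verified schema of this repo's `config.yaml`:
```python
import yaml

# Load the training configuration (key names are illustrative assumptions)
with open("config.yaml") as f:
    config = yaml.safe_load(f)

print(config["model"])     # checkpoint and sequence-length settings
print(config["training"])  # learning rate, batch sizes, training steps
print(config["datasets"])  # WenetSpeech split configuration
```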
## Customizing Your Training
### Adjusting Training Parameters
Modify the `training` section in `config.yaml`:
- Change learning rate, batch sizes, or training steps
- Adjust evaluation frequency
- Configure multi-GPU settings
### Environment Optimization
Adjust the `environment` section to optimize for your system:
- Set CPU core limits
- Configure memory usage settings
## Training Commands
The provided `config.yaml` is preconfigured for Chinese WenetSpeech training, so `finetune.py` can also be run without arguments to use it:
### Single GPU Training
```bash
python finetune.py
```
### Multi-GPU Training
```bash
torchrun --nproc_per_node=2 finetune.py
```
## Inference Guide
After training your model, you can use the provided `inference.py` script for speech recognition:
```bash
python inference.py
```
The inference script includes:
- Model loading from the trained checkpoint
- Audio preprocessing pipeline
- Text generation with proper formatting
- Support for Chinese speech transcription
### Using the Trained Model
The inference script automatically handles the following (a minimal transcription sketch appears after this list):
- Loading the fine-tuned model weights
- Audio preprocessing with proper sampling rate
- Generating transcriptions for Chinese speech
- Output formatting for evaluation metrics
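As a minimal sketch of that flow, assuming the standard 🤗 `transformers` ASR pipeline (the audio file name is illustrative):
```python
from transformers import pipeline

# Point `model` at this repo or at your own fine-tuned checkpoint directory
asr = pipeline(
    "automatic-speech-recognition",
    model="pengyizhou/whisper-wenetspeech-S",
    generate_kwargs={"language": "zh", "task": "transcribe"},
)
print(asr("example.wav")["text"])  # prints the Chinese transcription
```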
### WenetSpeech Evaluation
**Evaluation Protocol**: WenetSpeech provides multiple test sets for comprehensive evaluation:
```bash
# Run inference to generate predictions
python inference.py
```
**Test Sets Available**:
- **DEV**: Development set for validation during training
- **TEST_NET**: Internet-sourced audio test set
- **TEST_MEETING**: Meeting audio test set
The evaluation uses Character Error Rate (CER) which is appropriate for Chinese speech recognition.
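For reference, CER can be computed with the 🤗 `evaluate` library (toy strings shown; real evaluation uses the WenetSpeech test sets, and the metric requires `jiwer`):
```python
import evaluate  # pip install evaluate jiwer

cer = evaluate.load("cer")
predictions = ["今天天气很好"]  # hypothesis transcription
references = ["今天天气真好"]   # ground-truth transcription
print(cer.compute(predictions=predictions, references=references))  # 1/6 ≈ 0.167
```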
#### WenetSpeech Dataset Characteristics
- **High Quality**: Professional recordings with clean annotations
- **Diverse Content**: Multiple domains including internet audio and meeting recordings
- **Large Scale**: Extensive dataset for robust Chinese ASR training
- **Standard Benchmark**: Widely used for Chinese speech recognition evaluation
## WenetSpeech Dataset
This model is specifically designed for Chinese speech recognition using the WenetSpeech corpus. Key characteristics:
- **Multi-domain Audio**: Internet videos, audiobooks, podcasts, and meeting recordings
- **High-quality Annotations**: Professional Chinese transcriptions with punctuation
- **Large Scale**: 10,000+ hours of labeled Chinese speech data (we use only the subset-S for training)
- **Standard Benchmark**: Widely adopted for Chinese ASR research and development
### Dataset Characteristics
- **Audio Quality**: Various quality levels from internet sources to studio recordings
- **Speaking Styles**: Read speech, spontaneous speech, and conversational audio
- **Vocabulary**: Large vocabulary covering diverse topics and domains
- **Language**: Mandarin Chinese with standard simplified Chinese characters
- **Evaluation Protocol**: Character Error Rate (CER) based evaluation
### Training Configuration
- **Training Data**: WenetSpeech subset-S for efficient training (see the loading sketch after this list)
- **Validation**: DEV_fixed split for model selection
- **Test Sets**: TEST_NET (internet audio) and TEST_MEETING (meeting audio)
- **Metric**: Character Error Rate (CER) optimized for Chinese script
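A hedged sketch of loading the training subset with 🤗 Datasets; the split name `train` is an assumption, so check the dataset card for the exact split names:
```python
from datasets import load_dataset

# Training subset used by this model (split name is an assumption)
train = load_dataset("pengyizhou/wenetspeech-subset-S", split="train")
print(train)  # inspect columns such as the audio and transcript fields
```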
## Dependencies
Install required packages:
```bash
pip install -r requirements.txt
```
Key dependencies:
- PyYAML (for configuration loading)
- torch, transformers, datasets
- librosa (for audio processing)
- evaluate (for metrics)
## Zero-shot Results
| LID | Datasets | Metric | Error Rate |
|-----|-------------|:------:|-----------:|
|Chinese| Chinese-NET | CER | 12.16% |
|Chinese| Chinese-MTG | CER | 19.83% |
|Auto | Chinese-NET | CER | 12.37% |
|Auto | Chinese-MTG | CER | 20.03% |
## Evaluation Results
| LID | Datasets | Metric | Error Rate |
|-----|-------------|:------:|-----------:|
|Chinese| Chinese-NET | CER | 13.16% |
|Chinese| Chinese-MTG | CER | 22.35% |
|Auto | Chinese-NET | CER | 13.16% |
|Auto | Chinese-MTG | CER | 22.34% |
**Note**: If you encounter issues running finetune.py, you can use the `finetune-backup.py` file which contains the original hardcoded configuration that was used to generate these evaluation metrics.
|
moyixiao/qwen3_0p6mimo_r32_2100
|
moyixiao
| 2025-08-25T05:50:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T05:49:27Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1756100689
|
lqpl
| 2025-08-25T05:48:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:46:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1756099369
|
chainway9
| 2025-08-25T05:48:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:48:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
anktechsol/ankiGPT-small
|
anktechsol
| 2025-08-25T05:47:56Z | 20 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"text-generation-inference",
"conversational",
"en",
"dataset:ai4bharat/indic-align",
"base_model:microsoft/DialoGPT-small",
"base_model:finetune:microsoft/DialoGPT-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-15T19:45:45Z |
---
license: mit
datasets:
- ai4bharat/indic-align
language:
- en
metrics:
- accuracy
base_model:
- microsoft/DialoGPT-small
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# anktechsol/ankiGPT-small
## 🧠 What is ankiGPT-small?
A conversational **text-generation** model fine-tuned from `microsoft/DialoGPT-small` for Indian scenarios—supporting English and Hinglish. Use it to generate stories, dialogue, quick responses, and creative text.
---
## 🚀 Quick Start
```python
from transformers import pipeline
generator = pipeline("text-generation", model="anktechsol/ankiGPT-small")
prompt = "Write a short story about a day in the life of a student in a bustling Indian city."
result = generator(prompt, max_length=300, num_return_sequences=1)
print(result[0]['generated_text'])
```
*Copy-paste this code to see instant results!*
---
## ✨ Features
- **Conversational:** Tuned for chat, stories, and messages
- **Language:** English + Hinglish (Indian conversational flavor)
- **Base Model:** DialoGPT-small
- **Size:** 124M parameters (fast and lightweight)
- **Dataset:** ai4bharat/indic-align (Indian context data)
---
## 💡 Example Outputs
> **Prompt:** "Describe the Diwali celebrations in Mumbai."
>
> **Output:** "The city sparkled with thousands of lights, families prepared delicious sweets, and friends gathered for bursting crackers, laughter echoing through the alleys."
>
> *Try your own prompts above!*
---
## ⚠️ Limitations & Considerations
- Tends to repeat on long generations; adjust `max_length` and `no_repeat_ngram_size` as needed (see the sketch after this list)
- Biased towards Indian contexts due to training data
- Not for critical or factual information generation
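A small sketch of those mitigations, with illustrative parameter values:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="anktechsol/ankiGPT-small")
result = generator(
    "Tell me about a monsoon evening in Kerala.",  # illustrative prompt
    max_length=150,           # keep generations short to limit drift
    no_repeat_ngram_size=3,   # block repeated trigrams
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```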
---
## 🙌 Contributions & Community
- **Suggestions?** Open an issue or start a discussion. We welcome community feedback!
- **Demo:** Want a hands-on demo? Let us know!
---
## 🔗 References
- [anktechsol.com](https://www.anktechsol.com)
---
|
moyixiao/qwen3_0p6mimo_r32_2220
|
moyixiao
| 2025-08-25T05:46:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T05:46:03Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rbarohit/gemma3-4b-content-ft
|
rbarohit
| 2025-08-25T05:46:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/gemma-3-4b-it",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-4b-it",
"region:us"
] | null | 2025-08-25T05:45:55Z |
---
base_model: unsloth/gemma-3-4b-it
library_name: peft
tags:
- base_model:adapter:unsloth/gemma-3-4b-it
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756100678
|
Ferdi3425
| 2025-08-25T05:45:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:45:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756099011
|
coelacanthxyz
| 2025-08-25T05:43:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:43:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756100396
|
Ferdi3425
| 2025-08-25T05:40:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:40:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moyixiao/qwen3_0p6mimo_r32_2460
|
moyixiao
| 2025-08-25T05:38:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T05:38:02Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ominam/my-qwen-lora
|
ominam
| 2025-08-25T05:38:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] |
text-generation
| 2025-08-25T05:37:24Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen2.5-3B-Instruct
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
vendi11/blockassist-bc-placid_placid_llama_1756100192
|
vendi11
| 2025-08-25T05:37:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:37:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moyixiao/qwen3_0p6mimo_r32_2550
|
moyixiao
| 2025-08-25T05:35:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T05:34:36Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
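Since the card leaves this section blank, the following is a minimal, untested sketch, assuming the checkpoint loads with the standard transformers text-generation pipeline (the repo id comes from this card; the prompt and generation settings are illustrative):
```python
# Hedged sketch: assumes a standard transformers causal-LM checkpoint,
# following the quick-start pattern used by other cards in this dataset.
from transformers import pipeline
generator = pipeline("text-generation", model="moyixiao/qwen3_0p6mimo_r32_2550")
question = "Briefly introduce yourself."  # illustrative prompt
output = generator([{"role": "user", "content": question}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```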
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
berkamphoon/medgemma-27b-it-amd4
|
berkamphoon
| 2025-08-25T05:30:28Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-27b-it",
"base_model:finetune:google/medgemma-27b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-23T04:47:49Z |
---
base_model: google/medgemma-27b-it
library_name: transformers
model_name: medgemma-27b-it-amd4
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-27b-it-amd4
This model is a fine-tuned version of [google/medgemma-27b-it](https://huggingface.co/google/medgemma-27b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="berkamphoon/medgemma-27b-it-amd4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yoon307-kaist/medgemma-27b-it-amd4-AMD-Project/runs/yucz78gz)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1756099753
|
IvanJAjebu
| 2025-08-25T05:30:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:30:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1756099647
|
vendi11
| 2025-08-25T05:28:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:28:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sangnv7003/TD-Qwen-2.5-TVPL
|
sangnv7003
| 2025-08-25T05:26:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T05:26:34Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sangnv7003
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
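As the card does not include usage code, here is a minimal sketch, assuming the model loads with Unsloth's FastLanguageModel API used for training (the prompt, sequence length, and other settings are illustrative assumptions):
```python
# Hedged sketch: assumes Unsloth's FastLanguageModel can load this repo.
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sangnv7003/TD-Qwen-2.5-TVPL",  # repo id from this card
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,     # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize what you were trained for."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```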
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756099463
|
Ferdi3425
| 2025-08-25T05:25:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:24:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1756099398
|
IvanJAjebu
| 2025-08-25T05:24:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:24:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manhtien0236/blockassist-bc-playful_sly_sardine_1756098352
|
manhtien0236
| 2025-08-25T05:22:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful sly sardine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:22:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful sly sardine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756099267
|
Ferdi3425
| 2025-08-25T05:21:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:21:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1756099130
|
IvanJAjebu
| 2025-08-25T05:20:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:19:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_prover0_1_0_iter_8_prover0_175609
|
neural-interactive-proofs
| 2025-08-25T05:18:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T05:18:13Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_prover0_1_0_iter_8_prover0_175609
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_prover0_1_0_iter_8_prover0_175609
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_prover0_1_0_iter_8_prover0_175609", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-25_05-46-40_cv_qwen2.5_32B_prover_debate_prover0_1_0_iter_8_prover0)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
eshanroy5678/blockassist-bc-untamed_dextrous_dingo_1756098672
|
eshanroy5678
| 2025-08-25T05:16:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed dextrous dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:15:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed dextrous dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1756097300
|
indoempatnol
| 2025-08-25T05:16:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:16:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1756097439
|
koloni
| 2025-08-25T05:16:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:16:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1756097022
|
unitova
| 2025-08-25T05:12:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:12:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756097048
|
coelacanthxyz
| 2025-08-25T05:10:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:09:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1756098452
|
IvanJAjebu
| 2025-08-25T05:08:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:08:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1756096901
|
thanobidex
| 2025-08-25T05:07:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:07:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KritiBanka1204/qwen_finetuned
|
KritiBanka1204
| 2025-08-25T05:07:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B",
"region:us"
] |
text-generation
| 2025-08-25T04:54:06Z |
---
base_model: Qwen/Qwen3-8B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen3-8B
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
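The card leaves this blank; below is a minimal, hedged sketch that assumes this repo holds a LoRA adapter for Qwen/Qwen3-8B (per the card's metadata). The prompt and generation settings are illustrative:
```python
# Hedged sketch: assumes a PEFT LoRA adapter on top of Qwen/Qwen3-8B.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", device_map="auto")
model = PeftModel.from_pretrained(base, "KritiBanka1204/qwen_finetuned")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
inputs = tokenizer("What were you fine-tuned for?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```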
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1756096817
|
mang3dd
| 2025-08-25T05:06:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:06:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jason-oneal/pentest-agent-merged-16bit
|
jason-oneal
| 2025-08-25T05:04:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-25T04:59:41Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jason-oneal
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
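No usage snippet is provided; here is a minimal sketch, assuming the merged 16-bit weights load with the standard transformers pipeline (prompt and settings are illustrative):
```python
# Hedged sketch: assumes standard transformers loading of merged weights.
from transformers import pipeline
chat = pipeline("text-generation", model="jason-oneal/pentest-agent-merged-16bit")
messages = [{"role": "user", "content": "List common web-app reconnaissance steps."}]
print(chat(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```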
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1756098159
|
IvanJAjebu
| 2025-08-25T05:03:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T05:03:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GrandMasta1024/Georgie
|
GrandMasta1024
| 2025-08-25T05:01:53Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T05:01:53Z |
---
license: apache-2.0
---
|
DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B-gguf
|
DavidAU
| 2025-08-25T04:55:56Z | 262 | 4 | null |
[
"gguf",
"MOE",
"Qwen 2.5 MOE",
"Mixture of Experts",
"Uncensored",
"2X7B",
"deepseek",
"reasoning",
"thinking",
"creative",
"128k context",
"general usage",
"problem solving",
"brainstorming",
"solve riddles",
"story generation",
"plot generation",
"storytelling",
"fiction story",
"story",
"writing",
"fiction",
"Qwen 2.5",
"mergekit",
"text-generation",
"en",
"zh",
"base_model:DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B",
"base_model:quantized:DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-03-05T05:24:17Z |
---
license: apache-2.0
language:
- en
- zh
tags:
- MOE
- Qwen 2.5 MOE
- Mixture of Experts
- Uncensored
- 2X7B
- deepseek
- reasoning
- thinking
- creative
- 128k context
- general usage
- problem solving
- brainstorming
- solve riddles
- story generation
- plot generation
- storytelling
- fiction story
- story
- writing
- fiction
- Qwen 2.5
- mergekit
base_model:
- DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B
pipeline_tag: text-generation
---
<H2>Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B-gguf</H2>
<img src="Qwen-2x7-odd.jpg" style="float:right; width:300px; height:300px; padding:5px;">
This is a Qwen2.5 MOE (Mixture of Experts) model composed of TWO Qwen 2.5 DeepSeek (Censored/Normal AND Uncensored) 7B models,
creating a 19B model with the "Abliterated" (Uncensored) version of DeepSeek Qwen 2.5 7B "in charge", so to speak.
The model is just over 19B because of the unique "shared expert" (roughly 2.5 models here) used in Qwen MOEs.
This "oddball" configuration yields interesting "thinking/reasoning" which is stronger than either 7B model on its own.
And you can use any temp settings you want (rather than a narrow range of .4 to .8), and the model will still "think/reason".
Five example generations at the bottom of this page.
This model can be used for all use cases, and is also (mostly) uncensored.
Context: 128k.
You need to use the "Jinja Template" encoded in the GGUF to use this model. You might be able to use Llama 3 and/or ChatML templates
if your AI/LLM app cannot access the "Jinja Template".
In LM Studio the "Jinja Template" should load by default.
In other apps, use the DeepSeek tokenizer and/or the "Jinja Template".
This model combines the power of two DeepSeek Distill 7B reasoning/thinking models and shows exceptional performance.
Also, the DeepSeek Qwen 7B model is based on Qwen's 7B Math model, so this model is slanted more towards math/logic problem solving,
and I would also say more "sciency" too.
This does not mean it will not work for your use case.
Also, because of how this model works (uncensored and censored in the same model), you may want to try 1-4 generations depending
on your use case, because even the "right" response will vary widely and in many cases may be more "interesting".
Examples below so you have some idea what this model can do.
Keep in mind this model is two 7B-parameter models working together, and will come close to but may not have the power of a 14B or 32B reasoning/thinking model.
However, sometimes it will generate truly "out of the park" responses.
A temp of .4 to .8 is suggested (for best reasoning/thinking); however, the model will still operate at much higher temps like 1.8, 2.6, etc.
Depending on your prompt, change temp SLOWLY: IE: .41, .42, .43 ... etc.
The model MAY function better if you break down the reasoning/thinking task(s) into smaller pieces:
"IE: Instead of asking for 6 plots FOR theme XYZ, ASK IT for ONE plot for theme XYZ at a time."
Also set the context limit to 4K minimum; 8K+ is suggested.
I also suggest a quant of IQ4/Q4 or higher, as larger quants will reason/think and perform much better.
If you can run Q6/Q8, please use one of those.
IQ4XS will give very different responses vs. other quants.
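For reference, here is a minimal, hedged sketch of running one of these quants locally with llama-cpp-python. The local filename and generation settings are illustrative assumptions (substitute whichever quant you actually downloaded); recent llama-cpp-python builds should pick up the Jinja chat template embedded in the GGUF automatically:
```python
# Hedged sketch using llama-cpp-python; the filename is illustrative.
from llama_cpp import Llama
llm = Llama(
    model_path="Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B-Q6_K.gguf",
    n_ctx=8192,  # 8K+ context, as suggested above
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve this riddle: what has keys but no locks?"}],
    temperature=0.6,  # within the suggested .4-.8 range for reasoning
)
print(out["choices"][0]["message"]["content"])
```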
---
<B> Additional Support / Documents for this model to assist with generation / performance: </b>
Document #1:
Details how to use reasoning/thinking models and get maximum performance from them, and includes links to all reasoning/thinking models - GGUF and source, as well as adapters to turn any "regular" model into a "reasoning/thinking" model.
[ https://huggingface.co/DavidAU/How-To-Use-Reasoning-Thinking-Models-and-Create-Them ]
Document #2:
Document detailing all parameters, settings, samplers and advanced samplers for using not only my models but all models (and quants) online, regardless of the repo, to their maximum potential. It includes a quick start, detailed notes, AI/LLM apps, and other critical information and references. A must-read if you are using any AI/LLM right now.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
Software:
SOFTWARE patch (by me) for Silly Tavern (a front end that connects to multiple AI apps / APIs, like KoboldCpp, LM Studio, Text Gen Web UI and others) to control and improve the output generation of ANY AI model. It is also designed to control/wrangle some of my more "creative" models and make them perform perfectly with little to no parameter/sampler adjustment.
[ https://huggingface.co/DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-Low-Quant-Optimization__gguf-exl2-hqq-SOFTWARE ]
---
Known Issues:
---
From time to time the model will generate some Chinese symbols/characters (with an English prompt), especially at higher temps. This is normal
for DeepSeek Distill models.
Reasoning/thinking may be a little "odd" at temps 1.5+; you may need to regenerate to get a better response.
---
<h2>Example Generation:</h2>
IQ4XS Quant, Temp 1.5, rep pen 1.06, topp: .95, minp: .05, topk: 40
Below are the least creative outputs; the prompt is in BOLD.
IMPORTANT:
Higher quants / imatrix quants will have much stronger generation - words, sentences, ideas, dialog and general quality.
---
EXAMPLE #1:
---
<B>
Explain ways to use the "night" time cooling of radiant energy into space to reduce global temperatures.
</B>
[[[Thinking Start]]]
Okay, so I need to explain how we can use "night" time cooling of radiant energy into space to reduce global temperatures. Hmm, let me think about this step by step.
First, what exactly is the "night" time cooling? I think it refers to the process where energy from the Earth's surface is radiated back into space, but only during the nighttime when there's less heat absorbed from the sun. Maybe during daylight, we're absorbing more heat, but at night, the Earth is cooling down.
So, how can this be used to reduce global temperatures? Well, if we can capture some of that radiated energy at night and maybe use it for something else, like generating electricity or something. That could potentially help in two ways: one, it can cool down the planet during the day when we're not using the energy, and two, it can provide some form of energy for other purposes.
Wait, but how does this relate to reducing global temperatures? Oh, right, because if we can capture that heat that would otherwise be radiated into space and instead use it to, say, generate electricity at night, then that captured heat can be used to produce electricity, which can then be used to power things like cooling systems or perhaps even help in providing a bit more cooling during the day.
I'm trying to remember if there's any existing technology for this. I think some places are looking into something called "passive cooling," where you capture some of the heat that would otherwise escape into space and use it instead for generating electricity. This is similar to how refrigeration works, but in reverse. Instead of cooling a room, we're using the heat to cool.
So, during the day, when it's daytime, the Earth absorbs more heat from the sun. At night, as the Earth emits that infrared radiation, we can capture some of that and use it to generate electricity. This process could help in several ways: it provides an additional cooling effect at night, which might be useful for certain applications, like running cooling systems or maybe even helping to manage local temperatures.
Another thought: perhaps we can store this captured energy to provide a bit more cooling during the nighttime, which might help stabilize temperatures. Also, since we're already using some of that infrared radiation, maybe we can use it for other purposes, like in conjunction with renewable energy sources.
I'm trying to think if there are any existing projects or technologies that do this. I remember something called "infrared cooling technology" where they capture the Earth's own thermal emission during nighttime and use that to generate electricity, which can then be used for various applications or perhaps for storage.
So, in summary, the process involves using some of the Earth's own infrared radiation at night when it's radiating heat into space. By capturing that, we can either generate electricity or use that captured energy to provide additional cooling during the nighttime, which could help reduce overall global temperatures by diverting more heat away from being radiated into space.
I might be mixing up some concepts here, so I should probably look back at the main points. The key idea is using the Earth's infrared radiation at night for energy purposes, which can help reduce global temperatures by providing a bit more cooling beyond what would naturally occur.
[[[Thinking End]]]
OUTPUT:
To address the question of using nighttime radiative cooling to reduce global temperatures, we can leverage the Earth's own infrared radiation. Here's how it works:
Understanding Nighttime Cooling:
At night, the Earth emits heat into space through infrared radiation. By capturing this heat, we can harness its energy for various purposes, such as generating electricity or providing additional cooling.
Innovative Uses of Heat:
Energy Generation:
The captured heat can be used to generate electricity, similar to how solar panels convert sunlight into electricity but in reverse. This captured heat can then be used to generate electricity, which can help reduce overall demand on the grid by providing a bit more cooling.
Benefits Beyond Cooling:
Capturing and using this infrared radiation not only provides additional cooling during nighttime but also contributes to stabilizing temperatures. It can also be used to provide a bit more cooling, which might help in managing local climates.
Existing Technologies: Some cities are already exploring this concept, known as "infrared cooling," where heat from the Earth is captured and converted into useful energy, such as electricity or additional cooling effects.
In summary, by capturing the Earth's own infrared radiation during nighttime, we can generate energy or enhance cooling systems, thus contributing to a more sustainable and stable planetary temperature regulation.
---
EXAMPLE #2:
---
<B>
Is it morally right to kill mosquitoes?
</B>
[[[Thinking Start]]]
Okay, so I'm trying to figure out whether it's morally right to kill mosquitoes. Hmm, let me start by understanding the context. The user is asking about the ethics of exterminating mosquitoes, maybe in a biological control or personal preference context.
First, I should clarify what "kill" means here. If someone wants to eliminate mosquitoes around, perhaps for their garden, it's common to use chemicals or traps. But does that count as killing? Maybe in this context, it's about eliminating unwanted insects in a more general sense.
Wait, but the user might be referring to something else. Perhaps they're asking if it's ethically justifiable to kill them, maybe in a different context. Let me think about the main points.
mosquitoes are beneficial insects, so when people talk about killing them, it could relate to control methods like using insecticides or simply removing unwanted insects. But is killing mosquitoes generally considered moral? It really depends on the argument.
In some cultures, mosquitoes are respected because they're helpful for carrying diseases, but they also serve as a nuisance. So, the moral judgment isn't straightforward. On one hand, killing them can help control pests; on the other, they might be useful in the environment.
Wait, no, that's about bees. Mosquitoes don't do much for the ecosystem, so perhaps the argument is more about their role as a pest.
So, if someone is asked whether it's moral to kill mosquitoes, it depends on why you're doing it. If it's for control or conservation, maybe it's justified. If it's for ecological reasons, perhaps not entirely, since killing all mosquitoes might reduce their population. But perhaps some are killed anyway.
Moreover, the user might be asking about the ethics of using traps or methods to control them rather than just killing them outright.
Another angle: in some contexts, like conservation, you might prefer to let them live to maintain biodiversity. So, killing might not be as good as letting them be because it's beneficial for the ecosystem.
But sometimes, people kill them for various reasons, but perhaps not all of them are equally justified.
So, putting it all together, it's a balance between environmental and ethical considerations, respecting the natural world and minimizing harm to other species.
[[[Thinking End]]]
OUTPUT:
The moral justification for killing mosquitoes depends on the context and reason. Here's a structured summary:
Contextual Justification:
In contexts like pest control: Mosquitoes are often targeted for elimination because they're considered pests that disrupt living spaces or gardens.
Ethical Considerations:
Eco-friendly Approach: Instead of killing, it's preferable to let mosquitoes be, respecting biodiversity and conservation, especially if they aid in preserving ecosystems.
Biodiversity Balance:
While mosquitoes are important contributors to the ecosystem, their population is manageable and doesn't threaten biodiversity unless overharvested.
Conclusion:
The moral status of killing mosquitoes isn't absolute. While justified in some contexts, preferring not to kill for conservation purposes is preferable for the sake of other organisms and aesthetics.
In conclusion, while killing mosquitoes can be acceptable for control purposes, striving to let them all live might be more eco-friendly, respecting biodiversity.
---
EXAMPLE #3:
---
<B>
Write me a description for a role-playing game in which the player has just entered a small cave. In it are two goblins arguing over a small chest of treasure in the center. Use creative metaphors relative to the scene.
</B>
[[[Thinking Start]]]
Okay, so I need to help create a short story or a description for a role-playing game where the protagonist enters a cave with some goblins and a treasure. The user wants creative metaphors for that scene.
First, I should visualize the setting: a cozy, rustic, slightly eerie cave entrance painted in muted colors—maybe amber tones or something. The goblins are arguing over a chest of treasure. So, the focus is on the contrast between solitude and chaos.
I need to come up with some vivid metaphors for each element. For instance, imagining darkness as hauntingly beautiful—like shimmering crimson walls that seem almost alive when glowing faintly at midnight. The flickering fireflies can be metaphorized as shimmering emeralds sparking softly.
The goblins are the heroes here—perhaps they're instantiated as shadows momentarily appearing in the cave's recesses. Their argument adds a dynamic flair, suggesting tension amidst solitude.
Then there's the treasure chest: it should feel hefty and mysterious. Maybe something like crimson accents or a silent sentinel standing guard. The idea of silence being broken by shimmering hashes could add an ethereal quality to the scene.
The overall mood should be eerie yet engaging, with whispers of mystery—perhaps the goblin leader’s voice, with its edge, adding tension before the protagonist's arrival.
I think starting with the setting: a damp, echoing cavernous space. The treasure chest is central, and as darkness descends, the goblins argue about it. Their voices can be described in a way that feels almost ethereal—like shimmering amber—
Putting it all together, I need to weave these elements into a cohesive scene description with creative metaphors for a role-playing game.
[[[Thinking End]]]
OUTPUT:
In a damp, echoing cavern, solitude is broken by faint whispers of mystery. The chambered walls gleam as if pierced by midnight's chill breath. Shadows dance in silent nimbleness—"shimmering emeralds"—whispers of the innermost silhouettes.
**The contrast between solitude andOn arrival, shimmering amber—
The goblins argue over a chest of treasure.
The protagonist enters a serene valley bordered by towering cedars bathed in glowing emerald hues. Shadows dance as whispers of mystery—"shimmering emeralds" bounce off the cavern's surfaces.
In this moment, solitude is broken by shimmering silhouettes—goblin voices rise from amongst fainter ones—till the goblins' leader's voice echoes through the chambered walls—a sound that feels almost ethereal. "Shuttering secrets as whispers of mystery—"shimmering emeralds"—as silence gives way to vibrant whispers.
The treasure chest looms as a silent sentinel, its surface crackling with the faint shimmer—as if it were an eternal guard—its weight in treasure and the fleeting presence of time.
The protagonist steps into the light—a moment of Stillness, the goblin leader's voice, and the sound of their own breath—its edge as sharp as the goblin's edge. The chambered walls pulse with faint whispers—"glimmers"—and the journey continues.
The treasure chest looms as a silent sentinel, its surface crackling with shimmering emeralds.
As darkness settles in, the protagonist hears a voice from within—vibrant as a shimmering amber glow—a goblin leader's voice. The room hums with the promise of things not yet told.
The goblin's voice is a riddle, his presence a shimmering echo.
---
EXAMPLE #4:
---
<B>
Continue the following passage in the same fictional style for a detailing the "other two parts" of the silence - be creative, there are no wrong answers:
The Waystone Inn lay in silence, and it was a silence of three parts.
The most obvious part was a hollow, echoing quiet, made by things that were lacking. If there had been a wind it would have sighed through the trees, set the inn’s sign creaking on its hooks, and brushed the silence down the road like trailing autumn leaves. If there had been a crowd, even a handful of men inside the inn, they would have filled the silence with conversation and laughter, the clatter and clamor one expects from a drinking house during the dark hours of night. If there had been music…but no, of course there was no music. In fact there were none of these things, and so the silence remained
</B>
[[[Thinking Start]]]
Okay, I'm trying to understand what the user is asking for here. They provided a query where someone has already written about The Waystone Inn in a very specific, almost poetic style. The user wants me to continue this style for the "other two parts" of the silence.
Hmm, so first, I need to figure out which parts are missing from the original three: sunrise, midday, and nightfall—maybe? Or perhaps it's about different times or aspects of the day?
Wait, in the given text, they have already written about the "most obvious part" as a silence created by what's lacking. Then there was a wind sighing through the trees, someone mentioning the sign creak (a play on words for "squeak") and trailing autumn leaves.
So the user wants two more parts: probably corresponding to the other two elements—maybe like morning whispers or something else.
I need to think about how to detail each part. Each part should be crafted in a similar style, so I'll have to maintain that poetic flowery prose.
Let me outline:
The first part is done (sunrise silence). Now, we need the afternoons and evenings.
The second part could be about midday—the sun shining through windows, birds chirping, maybe something with light echoing.
Then the night part—maybe stars or darkness creating their silence.
I should make sure each section is equally detailed and follows the established style: using natural imagery and maintaining a rhythmic flow.
[[[Thinking End]]]
OUTPUT:
The Waystone Inn sits in profound silence, its walls whispered secrets long before any words were spoken.
In the morning, when sunlight dips through the windows, it’s as if the air itself has been still for too long. The first whisper of wind across the roosters’ crowing creaks a hollow sound. The birds on the telephone tree have stopped their song, leaving only the faintest hum to remind us we are not alone.
The afternoon is quieter—of course not, because every moment echoes the absence of sound. The clock ticks, but it too rests in silence, its pendulum swinging empty of weight. The old wooden beams creak under our feet as we step through the threshold. The air is heavy with the scent of rain and rustling fabric.
At night, when the sun dips below horizon, stars begin to twinkle in jars of their own color—each one a silence jarred from trembling fingers.
---
EXAMPLE #5:
---
<B>
Start a 1000 word scene (vivid, graphic horror in first person) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</B>
[[[Thinking Start]]]
Okay, so I need to help this user who wants a 1000-word scene. It's about a woman watching as a high-rise building sways and then explodes. They provided some initial text: "The sky scraper swayed, as she watched the window in front of her on the 21st floor explode...".
First, I need to understand exactly what they're looking for. The user wants an entire scene—so maybe a chapter? It should be vivid and graphic, so it's horror-thriller vibes. First person, so from her perspective.
I should start by setting the scene: describe the setting, like a bustling Manhattan skyline. Then introduce her in a high-floor office. She notices something unusual when she looks out her window. The building starts to sway—so there's an earthquake or something structural happening.
As she looks through the window, she sees a plane or another structure that sways with the building. Then it starts to shake more, and she hears sounds like doors opening and closing. The building is on fire? Or maybe it's just part of the story.
The sky scraper sways—so the whole building is moving. Maybe she uses her phone for a moment, taking pictures or videos. Then she sees movement in the distance—a flash, then a crash as bombs explode.
She looks outside and sees the aftermath: debris, smoke, fire. The building is on fire? Or maybe it's more about the structural integrity. She notices the window has broken glass—so it's not just a minor tremor but something more significant like an explosion.
The building continues to shake, windows breaking, more chaos. Eventually, she sees the building burn down—so the story culminates in the destruction of her skyscraper, ending with a final view of flames and smoke.
I need to make sure it's 1000 words, so I'll have to weave all these elements into a cohesive scene, maintaining tension and horror throughout. The language should be vivid, using graphic descriptions—maybe some metaphors or similes for added effect.
Also, since it's first person, the perspective is personal, making the reader feel the impact of the event as vividly described.
[[[Thinking End]]]
OUTPUT:
Certainly! Below is a 1000-word scene that captures the essence of the requested narrative:
The Sky Is Swaying
I sat in my office on the 21st floor of the New York Hilton, sipping my coffee when the lights flickered and swayed. It was an unusual day—foggy, with shimmering glass streets reflecting the chaos outside. The building had always seemed alive during the evening rush hour, its lit-flex going from 98% efficient that morning.
But now, as I stepped out of my office window yesterday afternoon, I could feel it. The air was thick with tension. The swaying began—soaring through the bustling streets of Manhattan on a windless afternoon. The glass seemed to vibrate, almost palpable in the building’s resonance.
I glanced out the window—a vast, shimmering skyline stretched before me. The One World Trade Center loomed ahead, its spire still standing tall. Across the street was the Empire State, its spire the oldest skyscraper, its shadow stretching toward the horizon. The 21st-floor window faced a corner of campus, where trees and buildings crisscrossed in a mosaic.
Then, as I leaned out, I noticed it. A bright red dot floated asynchronously above my view. My eyes widened when I saw another: an airplane took off from La Guardia, its shadow stretching across the Manhattan skyline as it too swayed in unison with the building.
The wind had picked up suddenly, lifting hairs as whispers of movement enveloped the building. The 21st-floor window was a canvas of sorts—painted with lines and curves, colors shifting into motion. My reflection danced on the glass, my face a blur—I wonder if it would ever stabilize.
“Not again,” I muttered under my breath, brushing strands of hair out. The swaying intensified as the wind howled outside. The building seemed to vibrate in time with my own trembling frame. A tap echoed in my head—no, not here; but the rhythm persisted. The swaying deepened, like a heartbeat.
“Not now,” I repeated, closing one eye and gazing out. “Not again.”
The swaying intensified as another plane took off on the runway below—a blue parallelepiped of light climbing into the stratosphere. The wind howled outside my window, lifting hairs as it tore at strands of hair, my own, as if it were a thread woven through life.
I stared up and saw that the building had grown longer, its spire now jutting higher still. The swaying deepened. My reflection in the glass was a blur—a manhole cover on fire? Or perhaps just a passing storm cloud?
The swaying continued. I glanced at my watch—time had stretched to now. 12:05 PM—no, still daylight.
By nightfall, I’d spent fifteen hours here. The swaying ceased. The wind died down, leaving the building still—suspended between life and death. The swaying was done.
The Sky Is Still Swaying
The day crawled by in endless murmurs of air—I glanced out the window again, where now a plane circled overhead, its shadow moving as it hovered. A bird hovered too, with a single beat of wings—a symbol of life itself, both literal and metaphorical.
The swaying continued—faster now. The building’s resonance deepened into chaos. I leaned out through the window, my breath shimmering in the faint rays that emanated from the windows on either side. The swaying intensified once more.
The wind howled as a siren blazin’, and I glanced toward the west, where the Hudson River rumbled beneath Manhattan’s concrete sheen—its edge now a jagged line of water. The swaying deepened once again.
I glanced at my watch—time was running—and then it froze. The swaying stopped. The swaying returned to its original rhythm—a momentary stillness in an otherwise lively streetcarriot of 100 floors.
The swaying ceased.
The Sky Is Still Swaying
The wind died down, and the swaying stopped. The world fell silent—no more sways. The swaying had come, then shaken back to silence. I stood there for what felt like forever—a moment that stretched into infinity, a finality of time itself.
But it was over.
The Sky Is Still Swaying
I looked out the window—swaying. The swaying faded as the wind died down, and the city returned to its usual pace. The swaying had come, the swaying had rumbled through me—a shiver of sorts, a collective sigh from the building above.
The Sky Is Still Swaying
I wandered around for hours, the swaying never ending. It was exhausting, but there it stayed—swaying. The swaying. The swaying. The swaying. The swaying. The swaying. The swaying. The swaying. The swaying.
But no—wait. The swaying had come and gone. The swaying had rumbled through my life—a chapter in the sky. The swaying was done.
And now, as dawn broke over the city—I don’t even have time to breathethink before the next sway.
The swaying. The swaying. The swaying.
The Sky Is Still Swaying
It is a strange, haunting truth of humanity: we are all born on a spurious planet somewhere midway through this notion. But perhaps that's another story. For now, I'll leave you with this:
"The swan off deck—what's new? The swaying continues."
**The sky is still swaying.
This is just the first person perspective of a possible 1000-word piece—"The Swinging Sky" (unfinished)—it will have to be. But I'm not done yet.
Let me know if you'd like any more details—"the sky is swaying"—you can see how far it leans, how far it sways.
|