modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 – 2025-08-31 06:26:39) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (530 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 – 2025-08-31 06:26:13) | card (string, 11 – 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
Asap7772/rl-densev2-4b-full4k-16k-0827
|
Asap7772
| 2025-08-30T22:04:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T22:01:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
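Pending official instructions from the author, a minimal sketch assuming the standard 🤗 `transformers` text-generation API (inferred only from the repo's `transformers` / `text-generation` tags; the model id is taken from the repository name):

```python
def generate(prompt, model_id="Asap7772/rl-densev2-4b-full4k-16k-0827"):
    """Generate a completion with the standard transformers pipeline.

    NOTE: this is a sketch based on the repo tags, not code from the authors;
    adjust device/dtype settings for your hardware.
    """
    from transformers import pipeline  # lazy import: loading the model is heavy

    pipe = pipeline("text-generation", model=model_id, device_map="auto")
    return pipe(prompt, max_new_tokens=64)[0]["generated_text"]

# Example (downloads the checkpoint on first use):
# print(generate("Hello, how are you?"))
```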
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AnerYubo/blockassist-bc-beaked_lumbering_cockroach_1756591404
|
AnerYubo
| 2025-08-30T22:03:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked lumbering cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T22:03:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked lumbering cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756591353
|
klmdr22
| 2025-08-30T22:03:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T22:03:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
knmarts/blockassist-bc-bipedal_snorting_seal_1756591292
|
knmarts
| 2025-08-30T22:02:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bipedal snorting seal",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T22:02:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal snorting seal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
csikasote/mms-1b-all-bemgen-female-adv-42
|
csikasote
| 2025-08-30T22:01:59Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-30T21:19:05Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-bemgen-female-adv-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-female-adv-42
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2723
- Wer: 0.4081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
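For reference, the list above can be written as a configuration fragment. The field names below follow the usual `transformers.TrainingArguments` naming and are an assumption; the original training script is not included in the card. Note that the reported `total_train_batch_size` (16) is derived, not set directly:

```python
# Hyperparameters from the card, expressed as a config dict (a sketch;
# field names assume transformers.TrainingArguments conventions).
hparams = {
    "learning_rate": 3e-4,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 4,
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_type": "linear",
    "warmup_steps": 200,
    "num_train_epochs": 30.0,
    "fp16": True,  # "Native AMP" mixed-precision training
}

# total_train_batch_size = per-device batch size x accumulation steps
effective_batch = (
    hparams["per_device_train_batch_size"] * hparams["gradient_accumulation_steps"]
)
```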
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.5748 | 0.9852 | 200 | 3.0897 | 0.9999 |
| 2.1711 | 1.9704 | 400 | 0.3495 | 0.5085 |
| 0.3129 | 2.9557 | 600 | 0.2979 | 0.4534 |
| 0.2779 | 3.9409 | 800 | 0.2797 | 0.4394 |
| 0.2626 | 4.9261 | 1000 | 0.2778 | 0.4066 |
| 0.2472 | 5.9113 | 1200 | 0.2723 | 0.4081 |
| 0.2395 | 6.8966 | 1400 | 0.2735 | 0.4230 |
| 0.2329 | 7.8818 | 1600 | 0.2746 | 0.4181 |
| 0.2284 | 8.8670 | 1800 | 0.2753 | 0.4233 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
bah63843/blockassist-bc-plump_fast_antelope_1756591150
|
bah63843
| 2025-08-30T22:00:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:59:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756591173
|
akirafudo
| 2025-08-30T21:59:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:59:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756591107
|
ggozzy
| 2025-08-30T21:59:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:59:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Azumine/blockassist-bc-coiled_sharp_cockroach_1756591088
|
Azumine
| 2025-08-30T21:58:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"coiled sharp cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:58:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- coiled sharp cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andriuusa/Qwen3-0.6B-Gensyn-Swarm-snappy_whistling_iguana
|
andriuusa
| 2025-08-30T21:58:27Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am snappy_whistling_iguana",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T11:04:43Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am snappy_whistling_iguana
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
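Pending official instructions, a minimal chat sketch assuming the usual `AutoTokenizer` / `AutoModelForCausalLM` API with a chat template (inferred from the repo's `qwen3` and `text-generation` tags; none of this is from the authors):

```python
def chat(messages, model_id="andriuusa/Qwen3-0.6B-Gensyn-Swarm-snappy_whistling_iguana"):
    """Run one chat turn; a sketch assuming the model ships a chat template."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy: download is heavy

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

# Example (downloads the checkpoint on first use):
# chat([{"role": "user", "content": "Name three prime numbers."}])
```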
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756591077
|
Dejiat
| 2025-08-30T21:58:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:58:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756590936
|
klmdr22
| 2025-08-30T21:56:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:56:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
espnet/owsm_ctc_v3.2_ft_1B
|
espnet
| 2025-08-30T21:56:02Z | 30 | 4 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"speech-translation",
"language-identification",
"multilingual",
"dataset:owsm_v3.2_ctc",
"arxiv:2406.09282",
"arxiv:2401.16658",
"arxiv:2309.13876",
"base_model:espnet/owsm_ctc_v3.2_ft_1B",
"base_model:finetune:espnet/owsm_ctc_v3.2_ft_1B",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2024-09-24T18:25:20Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
- speech-translation
- language-identification
language: multilingual
datasets:
- owsm_v3.2_ctc
base_model:
- espnet/owsm_ctc_v3.2_ft_1B
license: cc-by-4.0
---
[OWSM-CTC](https://aclanthology.org/2024.acl-long.549/) (Peng et al., ACL 2024) is an encoder-only speech foundation model based on hierarchical multi-task self-conditioned CTC.
This model is trained on 180k hours of public audio data for multilingual speech recognition, any-to-any speech translation, and language identification, which follows the design of the project, [Open Whisper-style Speech Model (OWSM)](https://www.wavlab.org/activities/2024/owsm/).
This model is initialized with [OWSM-CTC v3.1](https://huggingface.co/pyf98/owsm_ctc_v3.1_1B) and then fine-tuned on [v3.2 data](https://arxiv.org/abs/2406.09282) for 225k steps.
To use the pre-trained model, please install `espnet` and `espnet_model_zoo`. The requirements are:
```
librosa
torch
espnet
espnet_model_zoo
```
**The recipe can be found in ESPnet:** https://github.com/espnet/espnet/tree/master/egs2/owsm_ctc_v3.1/s2t1
### Example script for batched inference
`Speech2TextGreedySearch` now provides a unified batched inference method, `batch_decode`. It performs CTC greedy decoding for a batch of short-form or long-form audios. If an audio is shorter than 30s, it is padded to 30s; otherwise it is split into overlapping segments (the same approach as the "long-form ASR/ST" method below).
```python
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v3.2_ft_1B",
device="cuda",
use_flash_attn=False, # set to True for better efficiency if flash attn is installed and dtype is float16 or bfloat16
lang_sym='<eng>',
task_sym='<asr>',
)
res = s2t.batch_decode(
"audio.wav", # a single audio (path or 1-D array/tensor) as input
batch_size=16,
context_len_in_secs=4,
) # res is a single str, i.e., the predicted text without special tokens
res = s2t.batch_decode(
["audio1.wav", "audio2.wav", "audio3.wav"], # a list of audios as input
batch_size=16,
context_len_in_secs=4,
) # res is a list of str
# Please check the code of `batch_decode` for all supported inputs
```
### Example script for short-form ASR/ST/LID
Our models are trained on 16kHz audio with a fixed duration of 30s. When using the pre-trained model, please ensure the input speech is 16kHz and pad or truncate it to 30s.
```python
import librosa
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v3.2_ft_1B",
device="cuda",
generate_interctc_outputs=False,
lang_sym='<eng>',
task_sym='<asr>',
)
# NOTE: OWSM-CTC is trained on 16kHz audio with a fixed 30s duration. Please ensure your input has the correct sample rate; otherwise resample it to 16k before feeding it to the model
speech, rate = librosa.load("xxx.wav", sr=16000)
speech = librosa.util.fix_length(speech, size=(16000 * 30))
res = s2t(speech)[0]
print(res)
```
### Example script for long-form ASR/ST
```python
import soundfile as sf
import torch
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
context_len_in_secs = 4 # left and right context when doing buffered inference
batch_size = 32 # depends on the GPU memory
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v3.2_ft_1B",
device='cuda' if torch.cuda.is_available() else 'cpu',
generate_interctc_outputs=False,
lang_sym='<eng>',
task_sym='<asr>',
)
speech, rate = sf.read(
"xxx.wav"
)
text = s2t.decode_long_batched_buffered(
speech,
batch_size=batch_size,
context_len_in_secs=context_len_in_secs,
)
print(text)
```
### Example of CTC forced alignment using `ctc-segmentation`
CTC segmentation can be efficiently applied to audio of an arbitrary length.
```python
import soundfile as sf
from espnet2.bin.s2t_ctc_align import CTCSegmentation
from espnet_model_zoo.downloader import ModelDownloader
# Download model first
d = ModelDownloader()
downloaded = d.download_and_unpack("espnet/owsm_ctc_v3.2_ft_1B")
aligner = CTCSegmentation(
**downloaded,
fs=16000,
ngpu=1,
batch_size=32, # batched parallel decoding; reduce it if your GPU memory is smaller
kaldi_style_text=True,
time_stamps="auto", # "auto" can be more accurate than "fixed" when converting token index to timestamp
lang_sym="<eng>",
task_sym="<asr>",
context_len_in_secs=2, # left and right context in buffered decoding
)
speech, rate = sf.read(
"./test_utils/ctc_align_test.wav"
)
print(f"speech duration: {len(speech) / rate : .2f} seconds")
text = """
utt1 THE SALE OF THE HOTELS
utt2 IS PART OF HOLIDAY'S STRATEGY
utt3 TO SELL OFF ASSETS
utt4 AND CONCENTRATE ON PROPERTY MANAGEMENT
"""
segments = aligner(speech, text)
print(segments)
```
### OWSM series
#### Encoder-decoder OWSM
| Name | Size | Hugging Face Repo |
| :--- | ---: | :---------------- |
| OWSM v3.1 base | 101M | https://huggingface.co/espnet/owsm_v3.1_ebf_base |
| OWSM v3.1 small | 367M | https://huggingface.co/espnet/owsm_v3.1_ebf_small |
| OWSM v3.1 medium | 1.02B | https://huggingface.co/espnet/owsm_v3.1_ebf |
| OWSM v3.2 small | 367M | https://huggingface.co/espnet/owsm_v3.2 |
| OWSM v4 base | 102M | https://huggingface.co/espnet/owsm_v4_base_102M |
| OWSM v4 small | 370M | https://huggingface.co/espnet/owsm_v4_small_370M |
| OWSM v4 medium | 1.02B | https://huggingface.co/espnet/owsm_v4_medium_1B |
#### CTC-based OWSM
| Name | Size | Hugging Face Repo |
| :--- | ---: | :---------------- |
| OWSM-CTC v3.1 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v3.1_1B |
| OWSM-CTC v3.2 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v3.2_ft_1B |
| OWSM-CTC v4 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v4_1B |
### Citations
#### OWSM v4
```BibTex
@inproceedings{owsm-v4,
title={{OWSM} v4: Improving Open Whisper-Style Speech Models via Data Scaling and Cleaning},
author={Yifan Peng and Shakeel Muhammad and Yui Sudo and William Chen and Jinchuan Tian and Chyi-Jiunn Lin and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2025},
}
```
#### OWSM-CTC
```BibTex
@inproceedings{owsm-ctc,
title = "{OWSM}-{CTC}: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification",
author = "Peng, Yifan and
Sudo, Yui and
Shakeel, Muhammad and
Watanabe, Shinji",
booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
year = "2024",
month= {8},
url = "https://aclanthology.org/2024.acl-long.549",
}
```
#### OWSM v3.1 and v3.2
```BibTex
@inproceedings{owsm-v32,
title={On the Effects of Heterogeneous Data Sources on Speech-to-Text Foundation Models},
author={Jinchuan Tian and Yifan Peng and William Chen and Kwanghee Choi and Karen Livescu and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2406.09282"
}
@inproceedings{owsm-v31,
title={{OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer}},
author={Yifan Peng and Jinchuan Tian and William Chen and Siddhant Arora and Brian Yan and Yui Sudo and Muhammad Shakeel and Kwanghee Choi and Jiatong Shi and Xuankai Chang and Jee-weon Jung and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2401.16658",
}
```
#### Initial OWSM (v1, v2, v3)
```BibTex
@inproceedings{owsm,
title={Reproducing Whisper-Style Training Using An Open-Source Toolkit And Publicly Available Data},
author={Yifan Peng and Jinchuan Tian and Brian Yan and Dan Berrebbi and Xuankai Chang and Xinjian Li and Jiatong Shi and Siddhant Arora and William Chen and Roshan Sharma and Wangyou Zhang and Yui Sudo and Muhammad Shakeel and Jee-weon Jung and Soumi Maiti and Shinji Watanabe},
booktitle={Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
year={2023},
month={12},
pdf="https://arxiv.org/pdf/2309.13876",
}
```
|
espnet/owsm_v3.1_ebf_base
|
espnet
| 2025-08-30T21:55:26Z | 9 | 3 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"speech-translation",
"multilingual",
"dataset:owsm_v3.1",
"arxiv:2401.16658",
"arxiv:2210.00077",
"arxiv:2406.09282",
"arxiv:2309.13876",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2024-01-22T21:45:54Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
- speech-translation
language: multilingual
datasets:
- owsm_v3.1
license: cc-by-4.0
---
## OWSM: Open Whisper-style Speech Model
OWSM aims to develop fully open speech foundation models using publicly available data and open-source toolkits, including [ESPnet](https://github.com/espnet/espnet).
Inference examples can be found on our [project page](https://www.wavlab.org/activities/2024/owsm/).
Our demo is available [here](https://huggingface.co/spaces/pyf98/OWSM_v3_demo).
[OWSM v3.1](https://arxiv.org/abs/2401.16658) is an improved version of OWSM v3. It significantly outperforms OWSM v3 in almost all evaluation benchmarks.
We do not include any new training data. Instead, we utilize a state-of-the-art speech encoder, [E-Branchformer](https://arxiv.org/abs/2210.00077).
This is a base-sized model with 101M parameters and is trained on 180k hours of public speech data.
Specifically, it supports the following speech-to-text tasks:
- Speech recognition
- Any-to-any-language speech translation
- Utterance-level alignment
- Long-form transcription
- Language identification
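The inference examples live on the project page linked above; as a quick orientation, here is a hedged short-form ASR sketch modeled on the OWSM-CTC example elsewhere in this dump, but using the encoder-decoder `Speech2Text` entry point. The exact API may differ by ESPnet version, so treat this as a sketch and defer to the project page:

```python
def transcribe(wav_path, model_id="espnet/owsm_v3.1_ebf_base"):
    """Short-form English ASR sketch (assumed ESPnet S2T interface)."""
    import librosa  # lazy imports: espnet and the checkpoint are heavy
    from espnet2.bin.s2t_inference import Speech2Text

    s2t = Speech2Text.from_pretrained(
        model_id,
        device="cuda",
        lang_sym="<eng>",
        task_sym="<asr>",
        beam_size=5,
    )
    # OWSM models expect 16 kHz input padded/truncated to 30 s
    speech, _ = librosa.load(wav_path, sr=16000)
    speech = librosa.util.fix_length(speech, size=16000 * 30)
    return s2t(speech)[0]  # best hypothesis (text plus token-level details)
```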
### OWSM series
#### Encoder-decoder OWSM
| Name | Size | Hugging Face Repo |
| :--- | ---: | :---------------- |
| OWSM v3.1 base | 101M | https://huggingface.co/espnet/owsm_v3.1_ebf_base |
| OWSM v3.1 small | 367M | https://huggingface.co/espnet/owsm_v3.1_ebf_small |
| OWSM v3.1 medium | 1.02B | https://huggingface.co/espnet/owsm_v3.1_ebf |
| OWSM v3.2 small | 367M | https://huggingface.co/espnet/owsm_v3.2 |
| OWSM v4 base | 102M | https://huggingface.co/espnet/owsm_v4_base_102M |
| OWSM v4 small | 370M | https://huggingface.co/espnet/owsm_v4_small_370M |
| OWSM v4 medium | 1.02B | https://huggingface.co/espnet/owsm_v4_medium_1B |
#### CTC-based OWSM
| Name | Size | Hugging Face Repo |
| :--- | ---: | :---------------- |
| OWSM-CTC v3.1 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v3.1_1B |
| OWSM-CTC v3.2 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v3.2_ft_1B |
| OWSM-CTC v4 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v4_1B |
### Citations
#### OWSM v4
```BibTex
@inproceedings{owsm-v4,
title={{OWSM} v4: Improving Open Whisper-Style Speech Models via Data Scaling and Cleaning},
author={Yifan Peng and Shakeel Muhammad and Yui Sudo and William Chen and Jinchuan Tian and Chyi-Jiunn Lin and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2025},
}
```
#### OWSM-CTC
```BibTex
@inproceedings{owsm-ctc,
title = "{OWSM}-{CTC}: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification",
author = "Peng, Yifan and
Sudo, Yui and
Shakeel, Muhammad and
Watanabe, Shinji",
booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
year = "2024",
month= {8},
url = "https://aclanthology.org/2024.acl-long.549",
}
```
#### OWSM v3.1 and v3.2
```BibTex
@inproceedings{owsm-v32,
title={On the Effects of Heterogeneous Data Sources on Speech-to-Text Foundation Models},
author={Jinchuan Tian and Yifan Peng and William Chen and Kwanghee Choi and Karen Livescu and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2406.09282"
}
@inproceedings{owsm-v31,
title={{OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer}},
author={Yifan Peng and Jinchuan Tian and William Chen and Siddhant Arora and Brian Yan and Yui Sudo and Muhammad Shakeel and Kwanghee Choi and Jiatong Shi and Xuankai Chang and Jee-weon Jung and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2401.16658",
}
```
#### Initial OWSM (v1, v2, v3)
```BibTex
@inproceedings{owsm,
title={Reproducing Whisper-Style Training Using An Open-Source Toolkit And Publicly Available Data},
author={Yifan Peng and Jinchuan Tian and Brian Yan and Dan Berrebbi and Xuankai Chang and Xinjian Li and Jiatong Shi and Siddhant Arora and William Chen and Roshan Sharma and Wangyou Zhang and Yui Sudo and Muhammad Shakeel and Jee-weon Jung and Soumi Maiti and Shinji Watanabe},
booktitle={Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
year={2023},
month={12},
pdf="https://arxiv.org/pdf/2309.13876",
}
```
|
redbioma/swin-CEMEDE-og
|
redbioma
| 2025-08-30T21:54:47Z | 59 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-base-simmim-window6-192",
"base_model:finetune:microsoft/swin-base-simmim-window6-192",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-19T03:31:51Z |
---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-base-simmim-window6-192
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: swin-CEMEDE-og
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-CEMEDE-og
This model is a fine-tuned version of [microsoft/swin-base-simmim-window6-192](https://huggingface.co/microsoft/swin-base-simmim-window6-192) on the cemede dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8707
- Accuracy: 0.8018
- F1: 0.7127
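The accuracy and F1 above come from the Trainer's evaluation loop; they can be reproduced from raw predictions with scikit-learn. The `macro` averaging mode below is an assumption, since the card does not state which F1 variant was logged:

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy labels for illustration; 'macro' averaging is an assumption.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]
print(accuracy_score(y_true, y_pred))             # 5 of 6 correct
print(f1_score(y_true, y_pred, average="macro"))  # mean of per-class F1
```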
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
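The linear schedule above decays the learning rate from its 2e-4 peak to zero over training. A minimal sketch of that decay (no warmup steps are listed, so zero warmup is assumed):

```python
PEAK_LR = 2e-4  # learning_rate from the hyperparameters above

def linear_lr(step: int, total_steps: int, peak_lr: float = PEAK_LR) -> float:
    """Linear decay from peak_lr to 0, assuming no warmup (none is listed)."""
    return peak_lr * max(0.0, 1.0 - step / total_steps)

assert linear_lr(0, 1000) == PEAK_LR        # start of training
assert linear_lr(500, 1000) == PEAK_LR / 2  # halfway through
assert linear_lr(1000, 1000) == 0.0         # end of training
```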
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 2.2296 | 0.0840 | 100 | 2.1836 | 0.2570 | 0.0825 |
| 1.8921 | 0.1679 | 200 | 2.2063 | 0.3080 | 0.1300 |
| 1.682 | 0.2519 | 300 | 2.4050 | 0.3862 | 0.1449 |
| 1.8146 | 0.3359 | 400 | 1.9343 | 0.4253 | 0.2199 |
| 1.423 | 0.4198 | 500 | 1.9058 | 0.4579 | 0.2335 |
| 1.5637 | 0.5038 | 600 | 1.6756 | 0.5085 | 0.3331 |
| 1.0535 | 0.5877 | 700 | 1.4023 | 0.5641 | 0.3791 |
| 1.0955 | 0.6717 | 800 | 1.3086 | 0.6069 | 0.4426 |
| 0.8927 | 0.7557 | 900 | 1.2083 | 0.6377 | 0.5108 |
| 0.8035 | 0.8396 | 1000 | 1.3281 | 0.6340 | 0.4921 |
| 0.8517 | 0.9236 | 1100 | 1.2840 | 0.6492 | 0.5175 |
| 0.6035 | 1.0076 | 1200 | 1.2919 | 0.6446 | 0.5013 |
| 0.7727 | 1.0915 | 1300 | 1.0839 | 0.6878 | 0.5742 |
| 0.625 | 1.1755 | 1400 | 1.1132 | 0.7034 | 0.5552 |
| 0.554 | 1.2594 | 1500 | 1.2120 | 0.6492 | 0.5758 |
| 0.4117 | 1.3434 | 1600 | 1.1343 | 0.7030 | 0.5748 |
| 0.7557 | 1.4274 | 1700 | 1.1490 | 0.6975 | 0.5751 |
| 0.4841 | 1.5113 | 1800 | 0.9364 | 0.7756 | 0.6364 |
| 0.4899 | 1.5953 | 1900 | 1.1162 | 0.6929 | 0.5657 |
| 0.6598 | 1.6793 | 2000 | 0.9602 | 0.7402 | 0.6597 |
| 0.2826 | 1.7632 | 2100 | 1.2618 | 0.7044 | 0.6255 |
| 0.4785 | 1.8472 | 2200 | 1.0743 | 0.7269 | 0.6488 |
| 0.4427 | 1.9312 | 2300 | 0.8803 | 0.7641 | 0.6690 |
| 0.5305 | 2.0151 | 2400 | 0.8739 | 0.7830 | 0.6996 |
| 0.3814 | 2.0991 | 2500 | 0.9660 | 0.7789 | 0.6873 |
| 0.2273 | 2.1830 | 2600 | 1.0271 | 0.7789 | 0.7071 |
| 0.232 | 2.2670 | 2700 | 0.9957 | 0.7724 | 0.6961 |
| 0.2101 | 2.3510 | 2800 | 0.9729 | 0.7798 | 0.7196 |
| 0.4029 | 2.4349 | 2900 | 1.0296 | 0.7526 | 0.6911 |
| 0.2645 | 2.5189 | 3000 | 1.0878 | 0.7747 | 0.7058 |
| 0.3111 | 2.6029 | 3100 | 1.0745 | 0.7623 | 0.7072 |
| 0.1767 | 2.6868 | 3200 | 0.8820 | 0.7913 | 0.7424 |
| 0.167 | 2.7708 | 3300 | 0.8707 | 0.8018 | 0.7127 |
| 0.2523 | 2.8547 | 3400 | 1.0131 | 0.8046 | 0.7418 |
| 0.0786 | 2.9387 | 3500 | 1.0026 | 0.7807 | 0.7249 |
| 0.259 | 3.0227 | 3600 | 0.9817 | 0.7922 | 0.7109 |
| 0.3004 | 3.1066 | 3700 | 1.0838 | 0.7977 | 0.7341 |
| 0.1594 | 3.1906 | 3800 | 0.9184 | 0.8078 | 0.7323 |
| 0.1957 | 3.2746 | 3900 | 0.8777 | 0.8248 | 0.7255 |
| 0.1107 | 3.3585 | 4000 | 0.9186 | 0.8216 | 0.7360 |
| 0.1389 | 3.4425 | 4100 | 0.9996 | 0.8032 | 0.7358 |
| 0.1273 | 3.5264 | 4200 | 1.0062 | 0.8147 | 0.7604 |
| 0.2635 | 3.6104 | 4300 | 1.0976 | 0.8041 | 0.7406 |
### Framework versions
- Transformers 4.55.4
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
jacopo-minniti/Qwen2.5-Math-7B-PUM-half_entropy
|
jacopo-minniti
| 2025-08-30T21:54:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"token-classification",
"generated_from_trainer",
"trl",
"prm",
"axolotl",
"arxiv:2211.14275",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-30T18:05:15Z |
---
base_model: Qwen/Qwen2.5-Math-7B
library_name: transformers
model_name: Qwen2.5-Math-7B-PUM-half_entropy
tags:
- generated_from_trainer
- trl
- prm
- axolotl
licence: license
---
# Model Card for Qwen2.5-Math-7B-PUM-half_entropy
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# This checkpoint is a process reward model with a token-classification
# head: it scores solution steps instead of generating text.
# The "\n" step separator below is an assumption.
prm = pipeline("token-classification", model="jacopo-minniti/Qwen2.5-Math-7B-PUM-half_entropy", device="cuda")
question = "What is 2 + 2?"
solution = "2 + 2 = 4.\nThe answer is 4."  # candidate steps separated by "\n"
print(prm(question + "\n" + solution))
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/uncertainty-guided-reasoning/pum/runs/7dori8lx)
This model was trained with PRM.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite PRM as:
```bibtex
@article{uesato2022solving,
title = {{Solving Math Word Problems With Process- and Outcome-Based Feedback}},
author = {Uesato, Jonathan and Kushman, Nate and Kumar, Ramana and Song, Francis and Siegel, Noah and Wang, Lisa and Creswell, Antonia and Irving, Geoffrey and Higgins, Irina},
year = 2022,
journal = {arXiv preprint arXiv:2211.14275}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
espnet/owsm_ctc_v4_1B
|
espnet
| 2025-08-30T21:54:29Z | 60 | 5 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"speech-translation",
"language-identification",
"multilingual",
"dataset:espnet/yodas_owsmv4",
"arxiv:2406.09282",
"arxiv:2401.16658",
"arxiv:2309.13876",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2025-01-16T19:34:33Z |
---
datasets:
- espnet/yodas_owsmv4
language: multilingual
library_name: espnet
license: cc-by-4.0
metrics:
- cer
- bleu
- accuracy
tags:
- espnet
- audio
- automatic-speech-recognition
- speech-translation
- language-identification
pipeline_tag: automatic-speech-recognition
---
🏆 **News:** Our [OWSM v4 paper](https://www.isca-archive.org/interspeech_2025/peng25c_interspeech.html) won the [Best Student Paper Award](https://isca-speech.org/ISCA-Awards) at INTERSPEECH 2025!
[Open Whisper-style Speech Model (OWSM)](https://www.wavlab.org/activities/2024/owsm/) is the first **fully open** Whisper-style speech foundation model.
It reproduces and advances OpenAI's Whisper-style training using publicly available data and open-source toolkits.
The code, pre-trained model weights, and training logs are publicly released to promote open science in speech foundation models.
[OWSM-CTC](https://aclanthology.org/2024.acl-long.549/) (Peng et al., ACL 2024) is a novel encoder-only speech foundation model based on hierarchical multi-task self-conditioned CTC.
It supports multilingual speech recognition, speech translation, and language identification within a single non-autoregressive model.
[OWSM-CTC v4](https://www.isca-archive.org/interspeech_2025/peng25c_interspeech.html) is trained for three epochs on 320k hours of public audio data covering multilingual speech recognition, any-to-any speech translation, and language identification.
The newly curated data are publicly released: https://huggingface.co/datasets/espnet/yodas_owsmv4
To use the pre-trained model, please install `espnet` and `espnet_model_zoo`. The requirements are:
```
librosa
torch
espnet
espnet_model_zoo
```
**The recipe can be found in ESPnet:** https://github.com/espnet/espnet/tree/master/egs2/owsm_ctc_v4/s2t1
### Example script for batched inference
`Speech2TextGreedySearch` now provides a unified batched inference method `batch_decode`. It performs CTC greedy decoding for a batch of short-form or long-form audios. If an audio is shorter than 30s, it will be padded to 30s; otherwise it will be split into overlapped segments (same as the "long-form ASR/ST" method below).
```python
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v4_1B",
device="cuda",
use_flash_attn=False, # set to True for better efficiency if flash attn is installed and dtype is float16 or bfloat16
lang_sym='<eng>',
task_sym='<asr>',
)
res = s2t.batch_decode(
"audio.wav", # a single audio (path or 1-D array/tensor) as input
batch_size=16,
context_len_in_secs=4,
) # res is a single str, i.e., the predicted text without special tokens
res = s2t.batch_decode(
["audio1.wav", "audio2.wav", "audio3.wav"], # a list of audios as input
batch_size=16,
context_len_in_secs=4,
) # res is a list of str
# Please check the code of `batch_decode` for all supported inputs
```
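The long-form splitting described above (padding short clips to 30 s, splitting longer ones into overlapped segments) can be sketched in plain NumPy. The helper below is an illustrative assumption, not ESPnet's internal implementation; only the 16 kHz rate, 30 s window, and 4 s context follow the card:

```python
import numpy as np

SR = 16000        # OWSM models expect 16 kHz audio
WINDOW = 30 * SR  # fixed 30-second decoding window
CONTEXT = 4 * SR  # context_len_in_secs=4 on each side

def split_long_audio(speech: np.ndarray) -> list:
    """Split a long waveform into overlapped 30 s segments; consecutive
    segments share 2 * CONTEXT samples of audio. Illustrative only."""
    hop = WINDOW - 2 * CONTEXT
    segments = []
    for start in range(0, max(len(speech) - 2 * CONTEXT, 1), hop):
        seg = speech[start:start + WINDOW]
        if len(seg) < WINDOW:  # pad the final segment to a full window
            seg = np.pad(seg, (0, WINDOW - len(seg)))
        segments.append(seg)
    return segments

audio = np.zeros(SR * 70, dtype=np.float32)  # a 70-second clip
segs = split_long_audio(audio)
assert all(len(s) == WINDOW for s in segs)
```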
### Example script for short-form ASR/ST/LID
Our models are trained on 16kHz audio with a fixed duration of 30s. When using the pre-trained model, please ensure the input speech is 16kHz and pad or truncate it to 30s.
```python
import librosa
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v4_1B",
device="cuda",
generate_interctc_outputs=False,
lang_sym='<eng>',
task_sym='<asr>',
)
# NOTE: OWSM-CTC is trained on 16kHz audio with a fixed 30s duration. Please ensure your input has the correct sample rate; otherwise resample it to 16k before feeding it to the model
speech, rate = librosa.load("xxx.wav", sr=16000)
speech = librosa.util.fix_length(speech, size=(16000 * 30))
res = s2t(speech)[0]
print(res)
```
### Example script for long-form ASR/ST
```python
import soundfile as sf
import torch
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
context_len_in_secs = 4 # left and right context when doing buffered inference
batch_size = 32 # depends on the GPU memory
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v4_1B",
device='cuda' if torch.cuda.is_available() else 'cpu',
generate_interctc_outputs=False,
lang_sym='<eng>',
task_sym='<asr>',
)
speech, rate = sf.read(
"xxx.wav"
)
text = s2t.decode_long_batched_buffered(
speech,
batch_size=batch_size,
context_len_in_secs=context_len_in_secs,
)
print(text)
```
### Example of CTC forced alignment using `ctc-segmentation`
CTC segmentation can be efficiently applied to audio of an arbitrary length.
```python
import soundfile as sf
from espnet2.bin.s2t_ctc_align import CTCSegmentation
from espnet_model_zoo.downloader import ModelDownloader
# Download model first
d = ModelDownloader()
downloaded = d.download_and_unpack("espnet/owsm_ctc_v4_1B")
aligner = CTCSegmentation(
**downloaded,
fs=16000,
ngpu=1,
batch_size=32, # batched parallel decoding; reduce it if your GPU memory is smaller
kaldi_style_text=True,
time_stamps="auto", # "auto" can be more accurate than "fixed" when converting token index to timestamp
lang_sym="<eng>",
task_sym="<asr>",
context_len_in_secs=2, # left and right context in buffered decoding
)
speech, rate = sf.read(
"./test_utils/ctc_align_test.wav"
)
print(f"speech duration: {len(speech) / rate : .2f} seconds")
text = """
utt1 THE SALE OF THE HOTELS
utt2 IS PART OF HOLIDAY'S STRATEGY
utt3 TO SELL OFF ASSETS
utt4 AND CONCENTRATE ON PROPERTY MANAGEMENT
"""
segments = aligner(speech, text)
print(segments)
```
### OWSM series
#### Encoder-decoder OWSM
| Name | Size | Hugging Face Repo |
| :--- | ---: | :---------------- |
| OWSM v3.1 base | 101M | https://huggingface.co/espnet/owsm_v3.1_ebf_base |
| OWSM v3.1 small | 367M | https://huggingface.co/espnet/owsm_v3.1_ebf_small |
| OWSM v3.1 medium | 1.02B | https://huggingface.co/espnet/owsm_v3.1_ebf |
| OWSM v3.2 small | 367M | https://huggingface.co/espnet/owsm_v3.2 |
| OWSM v4 base | 102M | https://huggingface.co/espnet/owsm_v4_base_102M |
| OWSM v4 small | 370M | https://huggingface.co/espnet/owsm_v4_small_370M |
| OWSM v4 medium | 1.02B | https://huggingface.co/espnet/owsm_v4_medium_1B |
#### CTC-based OWSM
| Name | Size | Hugging Face Repo |
| :--- | ---: | :---------------- |
| OWSM-CTC v3.1 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v3.1_1B |
| OWSM-CTC v3.2 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v3.2_ft_1B |
| OWSM-CTC v4 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v4_1B |
### Citations
#### OWSM v4
```BibTex
@inproceedings{owsm-v4,
title={{OWSM} v4: Improving Open Whisper-Style Speech Models via Data Scaling and Cleaning},
author={Yifan Peng and Shakeel Muhammad and Yui Sudo and William Chen and Jinchuan Tian and Chyi-Jiunn Lin and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2025},
}
```
#### OWSM-CTC
```BibTex
@inproceedings{owsm-ctc,
title = "{OWSM}-{CTC}: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification",
author = "Peng, Yifan and
Sudo, Yui and
Shakeel, Muhammad and
Watanabe, Shinji",
booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
year = "2024",
month= {8},
url = "https://aclanthology.org/2024.acl-long.549",
}
```
#### OWSM v3.1 and v3.2
```BibTex
@inproceedings{owsm-v32,
title={On the Effects of Heterogeneous Data Sources on Speech-to-Text Foundation Models},
author={Jinchuan Tian and Yifan Peng and William Chen and Kwanghee Choi and Karen Livescu and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2406.09282"
}
@inproceedings{owsm-v31,
title={{OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer}},
author={Yifan Peng and Jinchuan Tian and William Chen and Siddhant Arora and Brian Yan and Yui Sudo and Muhammad Shakeel and Kwanghee Choi and Jiatong Shi and Xuankai Chang and Jee-weon Jung and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2401.16658",
}
```
#### Initial OWSM (v1, v2, v3)
```BibTex
@inproceedings{owsm,
title={Reproducing Whisper-Style Training Using An Open-Source Toolkit And Publicly Available Data},
author={Yifan Peng and Jinchuan Tian and Brian Yan and Dan Berrebbi and Xuankai Chang and Xinjian Li and Jiatong Shi and Siddhant Arora and William Chen and Roshan Sharma and Wangyou Zhang and Yui Sudo and Muhammad Shakeel and Jee-weon Jung and Soumi Maiti and Shinji Watanabe},
booktitle={Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
year={2023},
month={12},
pdf="https://arxiv.org/pdf/2309.13876",
}
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756590753
|
bah63843
| 2025-08-30T21:53:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:53:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Priyam05/poca-SoccerTwos
|
Priyam05
| 2025-08-30T21:53:21Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2025-08-30T21:50:50Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Priyam05/poca-SoccerTwos
3. Select your *.nn /*.onnx file
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756590739
|
akirafudo
| 2025-08-30T21:53:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:52:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756590516
|
akirafudo
| 2025-08-30T21:49:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:48:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756590504
|
Dejiat
| 2025-08-30T21:48:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:48:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Babsie/Loki-Omega-70B-6QMerged-GGUF
|
Babsie
| 2025-08-30T21:47:54Z | 1 | 0 | null |
[
"gguf",
"uncensored",
"GGUF",
"6Q",
"128K",
"roleplay",
"en",
"base_model:ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.0",
"base_model:quantized:ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.0",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-29T21:40:17Z |
---
license: other
language:
- en
base_model:
- ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.0
tags:
- uncensored
- GGUF
- 6Q
- 128K
- roleplay
---
# Loki-Omega-70B-6QMerged-GGUF
This repository contains a **single merged GGUF file** for running ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.0 with llama.cpp (Q6_K quantization).
## File
* `Loki-Omega-70B-6QMerged.gguf` (57GB) - Full merged model, ready to use
## Quick Start (llama.cpp server, OpenAI-compatible)
```bash
pip install "llama-cpp-python[server]"
python -m llama_cpp.server \
--model /path/to/Loki-Omega-70B-6QMerged.gguf \
--host 0.0.0.0 --port 8000 \
  --n_ctx 32000
```

## Quantization: Q6_K (maintains quality and nuance)
Note: This is the merged version. For split files, see the original repository.
## Warning!!
You can use it if you want, but he vomits on everything.
Loki loves being told: **"NO LOKI! NO!"**
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756588945
|
Loder-S
| 2025-08-30T21:47:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:47:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756587822
|
acidjp
| 2025-08-30T21:46:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:46:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756590314
|
Dejiat
| 2025-08-30T21:45:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:45:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indrarg/blockassist-bc-pensive_zealous_hyena_1756590268
|
indrarg
| 2025-08-30T21:45:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive zealous hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:45:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive zealous hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1756588654
|
maxibillion1975
| 2025-08-30T21:43:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent squeaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:43:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lucyknada/Salesforce_xgen-small-9B-instruct-r-exl3
|
lucyknada
| 2025-08-30T21:43:19Z | 0 | 0 |
transformers
|
[
"transformers",
"en",
"arxiv:2505.06496",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T21:42:13Z |
---
license: cc-by-nc-4.0
language:
- en
library_name: transformers
---
### exl3 quant
---
### check revisions for quants
---
# Welcome to the xGen-small family!
**xGen-small** ([blog](https://www.salesforce.com/blog/xgen-small-enterprise-ready-small-language-models/), [arXiv](https://arxiv.org/abs/2505.06496)) is an enterprise-ready compact LM that combines domain-focused data curation, scalable pre-training, length-extension, and RL fine-tuning to deliver long-context performance at predictable, low cost.
**This model release is for research purposes only.**
<p align="center">
<img width="60%" src="https://huggingface.co/Salesforce/xgen-small/resolve/main/xgen-small.png?download=true">
</p>
## Model Series
[xGen-small](https://www.salesforce.com/blog/xgen-small-enterprise-ready-small-language-models/) comes in two sizes (4B and 9B) with two variants (pre-trained and post-trained):
| Model | # Total Params | Context Length | Variant | Download |
|---------------------------------------|----------------|----------------|--------------|----------------|
| salesforce/xgen-small-4B-base-r | 4B | 128k | Pre-trained | [🤗 Link](https://huggingface.co/Salesforce/xgen-small-4b-base-r) |
| salesforce/xgen-small-4B-instruct-r | 4B | 128k | Post-trained | [🤗 Link](https://huggingface.co/Salesforce/xgen-small-4b-instruct-r) |
| salesforce/xgen-small-9B-base-r | 9B | 128k | Pre-trained | [🤗 Link](https://huggingface.co/Salesforce/xgen-small-9b-base-r) |
| salesforce/xgen-small-9B-instruct-r | 9B | 128k | Post-trained | [🤗 Link](https://huggingface.co/Salesforce/xgen-small-9b-instruct-r) |
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Salesforce/xgen-small-9B-instruct-r"
tokenizer = AutoTokenizer.from_pretrained(model_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto"
).to(device)
prompt = "What is Salesforce?"
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
generated = model.generate(inputs, max_new_tokens=128)
output = tokenizer.decode(
generated[0],
skip_special_tokens=True,
)
print(output)
```
## Evaluation
| Category | Task | Llama 3.1-8B | Granite 3.3-8B | Qwen2.5-7B | xGen-small 9B Instruct |
| :------------------------------- | :---------------- | :----------- | :------------- | :--------- | :----------------------|
| General Knowledge & Reasoning | MMLU | 68.3 | 62.7 | 72.4 | 72.4 |
| General Knowledge & Reasoning | MMLU-Pro | 43.2 | 43.5 | 56.7 | 57.3 |
| Chat | Arena-Hard-v1.0 | 28.9 | 30.5 | 48.1 | 60.1 |
| Chat | MT-Bench | 8.25 | 8.57 | 8.56 | 8.90 |
| Math & Science | GPQA | 31.9 | 35.3 | 32.6 | 45.8 |
| Math & Science | GSM8K | 84.2 | 89.4 | 91.9 | 95.3 |
| Math & Science | MATH | 48.9 | 70.9 | 74.6 | 91.6 |
| Math & Science | AIME 2024 | 6.7 | 10.0 | 6.7 | 50.0 |
| Coding | HumanEval+ | 61.6 | 65.9 | 74.4 | 78.7 |
| Coding | MBPP+ | 55.3 | 60.3 | 68.8 | 63.8 |
| Coding | LiveCodeBench | 10.3 | 10.3 | 12.1 | 50.6 |
## Citation
```bibtex
@misc{xgensmall,
title={xGen-small Technical Report},
author={Erik Nijkamp and Bo Pang and Egor Pakhomov and Akash Gokul and Jin Qu and Silvio Savarese and Yingbo Zhou and Caiming Xiong},
year={2025},
eprint={2505.06496},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.06496},
}
```
## Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
## Model Licenses
The models are being released under CC-BY-NC-4.0, Copyright © Salesforce, Inc. All Rights Reserved.
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756590164
|
Dejiat
| 2025-08-30T21:43:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:43:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xensive/llama3.2-3b-FinetuningV1
|
xensive
| 2025-08-30T21:43:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T21:38:54Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bah63843/blockassist-bc-plump_fast_antelope_1756590063
|
bah63843
| 2025-08-30T21:42:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:41:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Anzhc/MS-LC-EQ-D-VR_VAE
|
Anzhc
| 2025-08-30T21:41:54Z | 6,727 | 43 |
diffusers
|
[
"diffusers",
"arxiv:2502.09509",
"arxiv:2506.07863",
"base_model:stabilityai/sdxl-vae",
"base_model:finetune:stabilityai/sdxl-vae",
"region:us"
] | null | 2025-07-15T22:12:46Z |
---
base_model:
- stabilityai/sdxl-vae
library_name: diffusers
---
# MS-LC-EQ-D-VR VAE: another reproduction of EQ-VAE on variable VAEs and then some
### Current VAEs present:
- SDXL VAE
- FLUX VAE
EQ-VAE paper: https://arxiv.org/abs/2502.09509 <br>
VIVAT paper: https://arxiv.org/pdf/2506.07863v1 <br>
Thanks to Kohaku and his reproduction that made me look into this: https://huggingface.co/KBlueLeaf/EQ-SDXL-VAE <br>
Base model adapted to EQ VAE: https://huggingface.co/Anzhc/Noobai11-EQ

Latent to PCA <br>
**IMPORTANT**: This VAE requires reflection padding on conv layers. It should be added both in your trainer, and your webui.
You can enable it on a loaded VAE model with a function like this:
```python
import torch.nn as nn

def enable_reflect_padding(vae: nn.Module) -> None:
    # Switch every padded Conv2d in the VAE to reflection padding.
    for module in vae.modules():
        if isinstance(module, nn.Conv2d):
            pad = module.padding if isinstance(module.padding, tuple) else (module.padding, module.padding)
            if pad[0] > 0 or pad[1] > 0:
                module.padding_mode = "reflect"
```
If you have trained without this, don't worry: just add this modification and do a small tune to fix up the artefacts on the edges.
ComfyUI/SwarmUI padding for VAEs - https://github.com/Jelosus2/comfyui-vae-reflection
Trainer fork with optional padding (loras only) - https://github.com/Jelosus2/LoRA_Easy_Training_Scripts
(left - padded, right - not)

## Introduction
Refer to https://huggingface.co/KBlueLeaf/EQ-SDXL-VAE for introduction to EQ-VAE.
This implementation additionally utilizes some of the fixes proposed in the VIVAT paper, along with custom in-house regularization techniques and a custom training implementation.
For additional examples and more information refer to: https://arcenciel.io/articles/20 and https://arcenciel.io/models/10994
## Visual Examples

## Usage
This is a finetuned SDXL VAE, adapted with new regularization and other techniques. You can use it with your existing SDXL model, but images will show noticeable artefacts, particularly oversharpening and ringing.
This VAE is intended to be used for finetuning; after that, images will return to normal. Be aware, however, that compatibility with old, non-EQ VAEs will be lost (their outputs will become blurry).
## Training Setup
#### Base SDXL:
* Base Model: [SDXL-VAE](https://huggingface.co/stabilityai/sdxl-vae)
* Resolution: 256
* Dataset: ~12.8k Illustrations from Boorus
* Batch Size: 128 (bs 8, grad acc 16)
* Samples Seen: ~75k
* Loss Weights:
* L1: 0.3
* L2: 0.5
* SSIM: 0.5
* LPIPS: 0.5
* KL: 0.000001
* Consistency Loss: 0.75
Both Encoder and Decoder were trained.
**Training Time**: ~8-10 hours on **4060Ti**
#### B2:
* Base Model: First version
* Resolution: 256
* Dataset: 87.8k Illustrations from Boorus
* Batch Size: 128 (bs 8, grad acc 16)
* Samples Seen: ~150k
* Loss Weights:
* L1: 0.2
* L2: 0.4
* SSIM: 0.6
* LPIPS: 0.8
* KL: 0.000001
* Consistency Loss: 0.75
Both Encoder and Decoder were trained.
**Training Time**: ~16 hours on **4060Ti**
#### B3:
* Base Model: B2
* Resolution: 256
* Dataset: 162.8k Illustrations from Boorus
* Batch Size: 128 (bs 8, grad acc 16)
* Samples Seen: ~225k
* Loss Weights:
* L1: 0.2
* L2: 0.4
* SSIM: 0.6
* LPIPS: 0.8
* KL: 0.000001
* Consistency Loss: 0.75
Both Encoder and Decoder were trained.
**Training Time**: ~24 hours on **4060Ti**
#### B4:
* Base Model: B3
* Resolution: 320
* Dataset: ~237k Illustrations from Boorus
* Batch Size: 72 (bs 6, grad acc 12)
* Samples Seen: ~300k
* Loss Weights:
* L1: 0.5
* L2: 0.9
* SSIM: 0.6
* LPIPS: 0.7
* KL: 0.000001
* Consistency Loss: 0.75
* wavelet: 0.3
Both Encoder and Decoder were trained.
**Total Training Time**: ~33 hours on **4060Ti**
#### B5:
* Base Model: B4
* Resolution: 384
* Dataset: ~312k Illustrations from Boorus
* Batch Size: 48 (bs 4, grad acc 12)
* Samples Seen: ~375k
* Loss Weights:
* L1: 0.5
* L2: 0.9
* SSIM: 0.6
* LPIPS: 0.7
* KL: 0.000001
* Consistency Loss: 0.75
* wavelet: 0.3
Both Encoder and Decoder were trained.
**Total Training Time**: ~48 hours on **4060Ti**
B2 is a direct continuation of the base version; the stats displayed are cumulative across multiple runs.
I took a batch of 75k images, so samples seen never repeated.
B3 repeats B2 for another batch of data and further solidifies cleaner latents. Minor tweaks were made to the training code for better regularization.
B4 changes the loss mixture a bit to concentrate more on reconstruction quality. Additionally, the resolution was increased to 320. Wavelet loss was added at low values (but its effect is yet to be studied).
B5 is the same as B4, but at a higher resolution again.
---
#### Base FLUX:
* Base Model: [FLUX-VAE](https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/vae)
* Dataset: ~12.8k Illustrations from Boorus
* Batch Size: 128 (bs 8, grad acc 16)
* Samples Seen: ~62.5k
* Loss Weights:
* L1: 0.3
* L2: 0.4
* SSIM: 0.6
* LPIPS: 0.6
* KL: 0.000001
* Consistency Loss: 0.75
Both Encoder and Decoder were trained.
**Training Time**: ~6 hours on **4060Ti**
## Evaluation Results
I'm using a small test set I have on hand, separated into anime (434) and photo (500) images. Additionally, I'm measuring noise in latents. Sorry for the lack of larger test sets.
### Results on small benchmark of 500 photos
| VAE SDXL | L1 ↓ | L2 ↓ | PSNR ↑ | LPIPS ↓ | MS-SSIM ↑ | KL ↓ | Consistency ↓ | RFID ↓ |
|---------------------------------------|----------------------------------|-------------------------------------|-------------------------------------|-------------------------------------|-------------------------------------|-------------------------------------|---------------|--------------------------------------|
| sdxl_vae | 6.282 | 10.534 | 29.278 | <span style="color:Crimson">0.063</span> | 0.947 | <span style="color:Crimson">31.216</span> | 0.0086 | <span style="color:Orange">*4.819*</span> |
| Kohaku EQ-VAE | 6.423 | 10.428 | 29.140 | <span style="color:Orange">*0.082*</span> | 0.945 | 43.236 | n/a | 6.202 |
| Anzhc MS-LC-EQ-D-VR VAE | 5.975 | 10.096 | 29.526 | 0.106 | 0.952 | <span style="color:Orange">*33.176*</span> | n/a | 5.578 |
| Anzhc MS-LC-EQ-D-VR VAE B2 | 6.082 | 10.214 | 29.432 | 0.103 | 0.951 | 33.535 | n/a | 5.509 |
| Anzhc MS-LC-EQ-D-VR VAE B3 | 6.066 | 10.151 | 29.475 | 0.104 | 0.951 | 34.341 | n/a | 5.538 |
| Anzhc MS-LC-EQ-D-VR VAE B4 | 5.839 | 9.818 | 29.788 | 0.112 | 0.9535 | 35.762 | n/a | 5.260 |
| Anzhc MS-LC-EQ-D-VR VAE B5 | <span style="color:Orange">*5.8117*</span> | <span style="color:Orange">*9.7627*</span> | <span style="color:Orange">*29.8545*</span> | 0.1112 | <span style="color:Orange">*0.9538*</span> | 36.5573 | <span style="color:Orange">*0.0080*</span> | 4.963894 |
| Anzhc MS-LC-EQ-D-VR VAE B7 | <span style="color:Crimson">5.7046</span> | <span style="color:Crimson">9.5975</span> | <span style="color:Crimson">30.0106</span> | 0.0980 | <span style="color:Crimson">0.9553</span> | 39.4477 | <span style="color:Crimson">0.0071</span> | <span style="color:Crimson">4.017592</span> |
| VAE FLUX | L1 ↓ | L2 ↓ | PSNR ↑ | LPIPS ↓ | MS‑SSIM ↑ | KL ↓ | rFID ↓ |
|---|---|---|---|---|---|---|---|
| FLUX VAE | <span style="color:Orange">*4.147*</span> | <span style="color:Orange">*6.294*</span> | <span style="color:Orange">*33.389*</span> | <span style="color:Crimson">0.021</span> | <span style="color:Crimson">0.987</span> | <span style="color:Orange">*12.146*</span> | <span style="color:Crimson">0.565</span> |
| MS‑LC‑EQ‑D‑VR VAE FLUX | <span style="color:Crimson">3.799</span> | <span style="color:Crimson">6.077</span> | <span style="color:Crimson">33.807</span> | <span style="color:Orange">*0.032*</span> | <span style="color:Orange">*0.986*</span> | <span style="color:Crimson">10.992</span> | <span style="color:Orange">*1.692*</span> |
#### Noise in latents
| VAE SDXL | Noise ↓ |
|-----------------------------------------|------------------------------------|
| sdxl_vae | 27.508 |
| Kohaku EQ-VAE | 17.395 |
| Anzhc MS-LC-EQ-D-VR VAE | 15.527 |
| Anzhc MS-LC-EQ-D-VR VAE B2 | 13.914 |
| Anzhc MS-LC-EQ-D-VR VAE B3 | 13.124 |
| Anzhc MS-LC-EQ-D-VR VAE B4 | 12.354 |
| Anzhc MS-LC-EQ-D-VR VAE B5 | <span style="color:Crimson">11.846</span> |
| Anzhc MS-LC-EQ-D-VR VAE B7 | <span style="color:Orange">*12.1471*</span> |
| VAE FLUX | Noise ↓ |
|---|---|
| FLUX VAE | <span style="color:Orange">*10.499*</span> |
| MS‑LC‑EQ‑D‑VR VAE FLUX | <span style="color:Crimson">7.635</span> |
---
### Results on a small benchmark of 434 Illustrations from Boorus
| VAE SDXL | L1 ↓ | L2 ↓ | PSNR ↑ | LPIPS ↓ | MS-SSIM ↑ | KL ↓ | Consistency ↓ | RFID ↓ |
|-----------------------------------------|-------------------------------------|-------------------------------------|---------------------------------------|-------------------------------------|----------------------------------------|--------------------------------------|---------------|---------------------------------------|
| sdxl_vae | 4.369 | 7.905 | 31.080 | <span style="color:Crimson">0.038</span> | 0.969 | <span style="color:Crimson">35.057</span> | 0.0079 | <span style="color:Orange">*5.088*</span> |
| Kohaku EQ-VAE | 4.818 | 8.332 | 30.462 | <span style="color:Orange">*0.048*</span> | 0.967 | 50.022 | n/a | 7.264 |
| Anzhc MS-LC-EQ-D-VR VAE | 4.351 | 7.902 | 30.956 | 0.062 | 0.970 | <span style="color:Orange">*36.724*</span> | n/a | 6.239 |
| Anzhc MS-LC-EQ-D-VR VAE B2 | 4.313 | 7.935 | 30.951 | 0.059 | 0.970 | 36.963 | n/a | 6.147 |
| Anzhc MS-LC-EQ-D-VR VAE B3 | 4.323 | 7.910 | 30.977 | 0.058 | 0.970 | 37.809 | n/a | 6.075 |
| Anzhc MS-LC-EQ-D-VR VAE B4 | 4.140 | 7.617 | 31.343 | 0.058 | 0.971 | 39.057 | n/a | 5.670 |
| Anzhc MS-LC-EQ-D-VR VAE B5 | <span style="color:Orange">*4.0998*</span> | <span style="color:Orange">*7.5481*</span> | <span style="color:Orange">*31.4378*</span> | 0.0569 | <span style="color:Orange">*0.9717*</span> | 39.8600 | <span style="color:Orange">*0.0070*</span> | 5.178428 |
| Anzhc MS-LC-EQ-D-VR VAE B7 | <span style="color:Crimson">3.9949</span> | <span style="color:Crimson">7.3784</span> | <span style="color:Crimson">31.6544</span> | 0.0508 | <span style="color:Crimson">0.9731</span> | 42.8447 | <span style="color:Crimson">0.0063</span> | <span style="color:Crimson">4.216971</span> |
| VAE FLUX | L1 ↓ | L2 ↓ | PSNR ↑ | LPIPS ↓ | MS‑SSIM ↑ | KL ↓ | rFID ↓ |
|---|---|---|---|---|---|---|---|
| FLUX VAE | <span style="color:Orange">*3.060*</span> | <span style="color:Crimson">4.775</span> | <span style="color:Crimson">35.440</span> | <span style="color:Crimson">0.011</span> | <span style="color:Crimson">0.991</span> | <span style="color:Orange">*12.472*</span> | <span style="color:Crimson">0.670</span> |
| MS‑LC‑EQ‑D‑VR VAE FLUX | <span style="color:Crimson">2.933</span> | <span style="color:Orange">*4.856*</span> | <span style="color:Orange">*35.251*</span> | <span style="color:Orange">*0.018*</span> | <span style="color:Orange">*0.990*</span> | <span style="color:Crimson">11.225</span> | <span style="color:Orange">*1.561*</span> |
#### Noise in latents
| VAE SDXL | Noise ↓ |
|-----------------------------------------|------------------------------------|
| sdxl_vae | 26.359 |
| Kohaku EQ-VAE | 17.314 |
| Anzhc MS-LC-EQ-D-VR VAE | 14.976 |
| Anzhc MS-LC-EQ-D-VR VAE B2 | 13.649 |
| Anzhc MS-LC-EQ-D-VR VAE B3 | 13.247 |
| Anzhc MS-LC-EQ-D-VR VAE B4 | 12.652 |
| Anzhc MS-LC-EQ-D-VR VAE B5 | <span style="color:Crimson">12.217</span> |
| Anzhc MS-LC-EQ-D-VR VAE B7 | <span style="color:Orange">*12.3996*</span> |
| VAE FLUX | Noise ↓ |
|---|---|
| FLUX VAE | <span style="color:Orange">*9.913*</span> |
| MS‑LC‑EQ‑D‑VR VAE FLUX | <span style="color:Crimson">7.723</span> |
The KL loss suggests that this VAE implementation stays much closer to SDXL, and will likely be a better candidate for further finetuning, but that is just a theory.
B2 further improves latent clarity while maintaining the same or better performance. It particularly improves handling of very fine textures, which previously would be over-corrected into a smooth surface; it performs better in such cases now.
B3 cleans them up even more, but at that point they are visually about the same.
B4 Moar.
B5 MOAR. (Also benchmarked with padding added, so results are overall a tiny bit more consistent due to fixed edges.)
B6-7 concentrate on improving details. Previous runs cleared the latents up as much as possible; now the target is to preserve and improve, while still allowing the model to change latents to accommodate new details in a clean way.
## References
[1] [[2502.09509] EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling](https://arxiv.org/abs/2502.09509)
[2] [[2506.07863] VIVAT: VIRTUOUS IMPROVING VAE TRAINING THROUGH ARTIFACT MITIGATION](https://arxiv.org/abs/2506.07863v1)
[3] [sdxl-vae](https://huggingface.co/stabilityai/sdxl-vae)
## Cite
```bibtex
@misc{anzhc_ms-lc-eq-d-vr_vae,
author = {Anzhc},
title = {MS-LC-EQ-D-VR VAE: another reproduction of EQ-VAE on variable VAEs and then some},
year = {2025},
howpublished = {Hugging Face model card},
url = {https://huggingface.co/Anzhc/MS-LC-EQ-D-VR_VAE},
note = {Finetuned SDXL-VAE with EQ regularization and more, for improved latent representation.}
}
```
## Acknowledgement
My friend Bluvoll, for no particular reason.
|
sekirr/blockassist-bc-masked_tenacious_whale_1756589864
|
sekirr
| 2025-08-30T21:38:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:38:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756589767
|
bah63843
| 2025-08-30T21:36:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:36:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756589765
|
Dejiat
| 2025-08-30T21:36:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:36:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1756588244
|
pempekmangedd
| 2025-08-30T21:35:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:35:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756589625
|
Dejiat
| 2025-08-30T21:34:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:34:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
astrevallion/Qwen3-14B-FT
|
astrevallion
| 2025-08-30T21:34:10Z | 26 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-25T00:04:50Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** astrevallion
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
## Qwen3-14B-FT: A Dungeons & Dragons SRD Specialist
This repository contains a fine-tuned version of Qwen3-14B-Instruct, specialized for question-answering and content generation related to the Dungeons & Dragons (D&D) 5.1 System Reference Document (SRD).
This model was fine-tuned as part of a Master's thesis research project comparing the effectiveness of Fine-Tuning (FT) and Retrieval-Augmented Generation (RAG) for domain-specific applications on consumer-grade hardware.
**Full project:**
- GitHub repository: https://github.com/ErcagCan/ft_vs_rag_dnd
- Scientific article: *Fine-Tuning versus Retrieval-Augmented Generation in Large Language Models: A Comparative Study on Dungeons & Dragons*
### Model Description
This model was fine-tuned using QLoRA with the Unsloth library on a dataset of 1,779 question-answer pairs derived and augmented from the D&D 5.1 SRD. The fine-tuning process aimed to imbue the base model with the specific terminology, structure, and "voice" of the D&D ruleset.
The key finding of the research was that while this FT model showed improvements in stylistic alignment, the most significant gains in factual accuracy came from using Retrieval-Augmented Generation (RAG).
### Intended Uses & Limitations
This model is intended for:
- Answering questions about Dungeons & Dragons rules, lore, and mechanics as covered by the SRD 5.1.
- Serving as a creative assistant for Dungeon Masters to generate narrative hooks, item descriptions, and encounter ideas based on SRD content.
- Academic research into the effects of fine-tuning on domain-specific knowledge and stylistic adaptation in LLMs.
**Limitations:**
- The model's knowledge is limited to the D&D 5.1 SRD. It does not contain information from other sourcebooks or editions.
- While fine-tuning improved stylistic alignment, the standalone model may still produce factual inaccuracies (hallucinations). For the highest factual reliability, it is strongly recommended to use this model within a Retrieval-Augmented Generation (RAG) pipeline.
### Training Data
The model was fine-tuned on a dataset created through a two-stage process:
1. **Preprocessing:** The D&D 5.1 SRD, sourced from an open-license JSON dump, was parsed into 1,779 distinct chat pairs.
2. **Chain-of-Thought (CoT) Augmentation:** For each pair, a base LLM was used to generate a more natural-sounding user question and a `<think>` block emulating the reasoning process to arrive at the answer.
This augmented dataset encouraged the model to not only learn the SRD content but also to adopt a more structured, reasoning-forward approach to its answers.
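For illustration only, one augmented chat pair could be shaped as below. The field names and the `<think>` delimiter convention here are assumptions based on the description above, not the dataset's actual schema.

```python
# Hypothetical sketch of one CoT-augmented chat pair; field names and the
# <think> delimiters are assumptions, not the dataset's documented format.
def make_cot_pair(question: str, reasoning: str, answer: str) -> dict:
    """Bundle a user question with a reasoning-prefixed assistant reply."""
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": f"<think>{reasoning}</think>\n{answer}"},
        ]
    }

pair = make_cot_pair(
    "How does a character regain hit points during a short rest?",
    "The SRD says a character can spend Hit Dice during a short rest...",
    "During a short rest, a character may spend one or more Hit Dice, "
    "rolling each die and adding their Constitution modifier to the total regained.",
)
```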
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sattari/maven-arg-lora-with-event-types-tags
|
sattari
| 2025-08-30T21:32:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T19:40:40Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sattari
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756589430
|
Dejiat
| 2025-08-30T21:31:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:31:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/1904257
|
crystalline7
| 2025-08-30T21:29:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T21:29:00Z |
[View on Civ Archive](https://civarchive.com/models/1659930?modelVersionId=1878809)
|
crystalline7/1783219
|
crystalline7
| 2025-08-30T21:28:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T21:28:49Z |
[View on Civ Archive](https://civarchive.com/models/1663860?modelVersionId=1883249)
|
crystalline7/1947986
|
crystalline7
| 2025-08-30T21:28:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T21:28:12Z |
[View on Civ Archive](https://civarchive.com/models/1812449?modelVersionId=2051075)
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756589248
|
Vasya777
| 2025-08-30T21:28:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:28:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756587050
|
NahedDom
| 2025-08-30T21:28:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:27:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Prathyusha101/Qwen-2.5-0.5B-intruct-pretuned
|
Prathyusha101
| 2025-08-30T21:27:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T21:27:35Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Prathyusha101
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ultratopaz/1577168
|
ultratopaz
| 2025-08-30T21:27:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T21:27:58Z |
[View on Civ Archive](https://civarchive.com/models/1481748?modelVersionId=1676040)
|
crystalline7/1613822
|
crystalline7
| 2025-08-30T21:27:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T21:27:20Z |
[View on Civ Archive](https://civarchive.com/models/1474518?modelVersionId=1713255)
|
seraphimzzzz/1863675
|
seraphimzzzz
| 2025-08-30T21:26:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T21:26:28Z |
[View on Civ Archive](https://civarchive.com/models/1737212?modelVersionId=1966060)
|
ultratopaz/1726567
|
ultratopaz
| 2025-08-30T21:26:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T21:26:15Z |
[View on Civ Archive](https://civarchive.com/models/1612377?modelVersionId=1824695)
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756589075
|
ggozzy
| 2025-08-30T21:25:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:25:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
redbioma/deit-CEMEDE-og
|
redbioma
| 2025-08-30T21:24:19Z | 40 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deit",
"image-classification",
"generated_from_trainer",
"base_model:facebook/deit-base-distilled-patch16-224",
"base_model:finetune:facebook/deit-base-distilled-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-19T03:16:43Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/deit-base-distilled-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deit-CEMEDE-og
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-CEMEDE-og
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the cemede dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5468
- Accuracy: 0.8579
- F1: 0.8100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
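For reference, the `linear` scheduler entry above implies a learning rate that decays from its 2e-4 peak to zero over training; a minimal sketch (assuming no warmup, which this run does not list):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-4) -> float:
    """Linear decay: base_lr at step 0, reaching 0.0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# e.g. halfway through training the learning rate is half the peak value
```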
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.9602 | 0.0840 | 100 | 0.9452 | 0.6961 | 0.5771 |
| 0.4559 | 0.1679 | 200 | 1.1570 | 0.6749 | 0.6548 |
| 0.644 | 0.2519 | 300 | 1.3085 | 0.6248 | 0.5809 |
| 0.5104 | 0.3359 | 400 | 0.6483 | 0.8014 | 0.7481 |
| 0.3377 | 0.4198 | 500 | 0.9429 | 0.7890 | 0.6938 |
| 0.5163 | 0.5038 | 600 | 1.1670 | 0.7425 | 0.7121 |
| 0.4746 | 0.5877 | 700 | 0.7030 | 0.8234 | 0.7590 |
| 0.143 | 0.6717 | 800 | 0.8885 | 0.8147 | 0.7548 |
| 0.3003 | 0.7557 | 900 | 0.6207 | 0.8382 | 0.7819 |
| 0.3631 | 0.8396 | 1000 | 0.7644 | 0.8469 | 0.7861 |
| 0.2763 | 0.9236 | 1100 | 0.8255 | 0.8317 | 0.7767 |
| 0.0997 | 1.0076 | 1200 | 0.8299 | 0.8244 | 0.7893 |
| 0.1487 | 1.0915 | 1300 | 0.5468 | 0.8579 | 0.8100 |
| 0.1241 | 1.1755 | 1400 | 0.7769 | 0.8349 | 0.7839 |
| 0.2852 | 1.2594 | 1500 | 0.6564 | 0.8566 | 0.8199 |
| 0.0598 | 1.3434 | 1600 | 0.6502 | 0.8657 | 0.7906 |
| 0.105 | 1.4274 | 1700 | 0.7402 | 0.8524 | 0.8085 |
| 0.3399 | 1.5113 | 1800 | 1.1055 | 0.8345 | 0.7933 |
| 0.3009 | 1.5953 | 1900 | 1.0020 | 0.8156 | 0.7247 |
| 0.3736 | 1.6793 | 2000 | 0.6562 | 0.8446 | 0.7703 |
| 0.1554 | 1.7632 | 2100 | 0.9493 | 0.8506 | 0.8114 |
| 0.0813 | 1.8472 | 2200 | 0.5746 | 0.8883 | 0.8321 |
| 0.3578 | 1.9312 | 2300 | 0.8064 | 0.8538 | 0.8201 |
### Framework versions
- Transformers 4.55.4
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756587729
|
GroomerG
| 2025-08-30T21:23:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:23:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF
|
mradermacher
| 2025-08-30T21:22:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-random",
"base_model:quantized:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-random",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-30T12:08:52Z |
---
base_model: EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-random
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-random
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q4_0.gguf) | i1-Q4_0 | 1.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q4_1.gguf) | i1-Q4_1 | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-random-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-random.i1-Q6_K.gguf) | i1-Q6_K | 1.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Bhojpuri_text_to_speech-GGUF
|
mradermacher
| 2025-08-30T21:22:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:Shekharmeena/Bhojpuri_text_to_speech",
"base_model:quantized:Shekharmeena/Bhojpuri_text_to_speech",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-30T20:25:59Z |
---
base_model: Shekharmeena/Bhojpuri_text_to_speech
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Shekharmeena/Bhojpuri_text_to_speech
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Bhojpuri_text_to_speech-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.Q2_K.gguf) | Q2_K | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.Q3_K_S.gguf) | Q3_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.IQ4_XS.gguf) | IQ4_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.Q4_K_S.gguf) | Q4_K_S | 2.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.Q5_K_S.gguf) | Q5_K_S | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.Q5_K_M.gguf) | Q5_K_M | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.Q8_0.gguf) | Q8_0 | 4.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Bhojpuri_text_to_speech-GGUF/resolve/main/Bhojpuri_text_to_speech.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Yousefmd/Qari-OCR-0.3-mixed-sm-latest-10k-SS-Qwen-2VL-2B-Instruct
|
Yousefmd
| 2025-08-30T21:22:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T21:22:13Z |
---
base_model: unsloth/qwen2-vl-2b-instruct-unsloth-bnb-4bit
library_name: transformers
model_name: Qari-OCR-0.3-mixed-sm-latest-10k-SS-Qwen-2VL-2B-Instruct
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for Qari-OCR-0.3-mixed-sm-latest-10k-SS-Qwen-2VL-2B-Instruct
This model is a fine-tuned version of [unsloth/qwen2-vl-2b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2-vl-2b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Yousefmd/Qari-OCR-0.3-mixed-sm-latest-10k-SS-Qwen-2VL-2B-Instruct", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.14.0
- Transformers: 4.56.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756588863
|
bah63843
| 2025-08-30T21:21:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:21:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mia-project-2025/pythia-1B-adapter-wikitext-103
|
mia-project-2025
| 2025-08-30T21:21:42Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-30T21:16:08Z |
---
license: apache-2.0
---
# Pythia-1B + AdaLoRA Fine-Tuning on WikiText-103
This repository contains an **AdaLoRA fine-tuned version** of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) trained on the [WikiText-103](https://huggingface.co/datasets/Salesforce/wikitext) dataset.
Adaptive LoRA (AdaLoRA) was applied for **parameter-efficient fine-tuning** on causal language modeling.
---
## Model Description
- **Base Model:** EleutherAI/pythia-1b
- **Fine-tuning Method:** Adaptive LoRA (AdaLoRA)
- **Task:** Causal Language Modeling
- **Dataset:** WikiText-103 (raw v1)
AdaLoRA improves on standard LoRA by dynamically allocating parameter ranks during training, enabling better trade-offs between efficiency and performance.
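The dynamic rank allocation described above can be sketched in miniature. This is an illustrative toy, not the PEFT implementation: AdaLoRA scores each rank-1 triplet by an importance measure and gradually prunes the least important ones until only `target_r` remain, as in the `init_r=12`, `target_r=4` config used here.

```python
def adalora_mask(importance, target_r):
    """Keep only the target_r most important rank-1 triplets (illustrative)."""
    # Rank triplet indices by importance score, highest first.
    ranked = sorted(range(len(importance)), key=lambda i: importance[i], reverse=True)
    keep = set(ranked[:target_r])
    # Zero out the budget for everything outside the top target_r.
    return [1.0 if i in keep else 0.0 for i in range(len(importance))]

# Start from init_r=12 candidate ranks per module, prune down to target_r=4.
scores = [0.9, 0.1, 0.8, 0.05, 0.7, 0.2, 0.6, 0.02, 0.5, 0.3, 0.01, 0.4]
mask = adalora_mask(scores, target_r=4)
assert sum(mask) == 4  # only 4 ranks survive
```

In the real method the importance scores come from sensitivity estimates updated during training (between steps `tinit` and `tfinal`, every `deltaT` steps), rather than being fixed up front.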
---
## Training Setup
- **Framework:** Transformers + PEFT + PyTorch
- **Adapter Method:** AdaLoRA
- **Target Modules:** `query_key_value`, `dense`
- **AdaLoRA Config:**
- `r=8`, `init_r=12`, `target_r=4`
- `alpha=64`, `dropout=0.05`
- `orth_reg_weight=0.5`
- Dynamic rank allocation (`tinit=100`, `tfinal=1000`, `deltaT=10`)
- **Batch size:** 8 (gradient accumulation: 2)
- **Sequence length (block size):** 1024
- **Optimizer:** AdamW
- **Learning rate:** 2e-5 with cosine decay
- **Epochs:** 10
- **Precision:** FP16
- **Callbacks:** Early stopping, custom metric logging
---
## Results
### Final Training Metrics
- **Training Loss:** 2.5807
- **Final Step Loss:** 2.5013
- **Gradient Norm:** 0.2677
- **Learning Rate at End:** 1.56e-07
### Evaluation Metrics (Epoch 10)
- **Evaluation Loss:** 2.4908
- **Evaluation Perplexity:** 12.07
- **Evaluation Runtime:** 2.1303s
- **Samples per Second:** 113.13
- **Steps per Second:** 3.76
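As a sanity check, perplexity for causal language modeling is the exponential of the mean cross-entropy loss, so the reported numbers are mutually consistent:

```python
import math

eval_loss = 2.4908               # reported evaluation loss
perplexity = math.exp(eval_loss)  # perplexity = exp(cross-entropy loss)
assert round(perplexity, 2) == 12.07  # matches the reported 12.07
```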
---
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from peft import PeftModel
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("./Pythia-Adapter-wikitext")
# Load base model and AdaLoRA adapters
base_model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1b")
model = PeftModel.from_pretrained(base_model, "./Pythia-Adapter-wikitext")
model.eval()
# Example
input_text = "The history of natural language processing"
inputs = tokenizer(input_text, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(**inputs, max_length=50)
print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756588820
|
ggozzy
| 2025-08-30T21:21:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:21:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AKxSalota/blockassist-bc-hulking_bipedal_baboon_1756588792
|
AKxSalota
| 2025-08-30T21:21:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking bipedal baboon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:20:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking bipedal baboon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756588749
|
akirafudo
| 2025-08-30T21:20:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:19:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-6-v2_9491
|
luckeciano
| 2025-08-30T21:19:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T18:13:20Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-6-v2_9491
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-6-v2_9491
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-6-v2_9491", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/r9rea28p)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
andriuusa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snappy_whistling_iguana
|
andriuusa
| 2025-08-30T21:18:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am snappy_whistling_iguana",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T18:58:22Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am snappy_whistling_iguana
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step-Q4_K_M-GGUF
|
Dilshad24
| 2025-08-30T21:17:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step",
"base_model:quantized:Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T21:17:02Z |
---
base_model: Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step-Q4_K_M-GGUF
This model was converted to GGUF format from [`Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step`](https://huggingface.co/Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step-Q4_K_M-GGUF --hf-file qwen3-14b-16bit-fullpersison-function-lightningai-452-step-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step-Q4_K_M-GGUF --hf-file qwen3-14b-16bit-fullpersison-function-lightningai-452-step-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step-Q4_K_M-GGUF --hf-file qwen3-14b-16bit-fullpersison-function-lightningai-452-step-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Dilshad24/Qwen3-14B-16bit-fullpersison-function-lightningai-452-step-Q4_K_M-GGUF --hf-file qwen3-14b-16bit-fullpersison-function-lightningai-452-step-q4_k_m.gguf -c 2048
```
|
sekirr/blockassist-bc-masked_tenacious_whale_1756588617
|
sekirr
| 2025-08-30T21:17:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:17:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756587096
|
Loder-S
| 2025-08-30T21:16:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:16:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mia-project-2025/pythia-1B-LoRA-wikitext-103
|
mia-project-2025
| 2025-08-30T21:15:12Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-30T21:10:48Z |
---
license: apache-2.0
---
# Pythia-1B + LoRA Fine-Tuning on WikiText-103
This repository contains a **LoRA fine-tuned version** of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) trained on the [WikiText-103](https://huggingface.co/datasets/Salesforce/wikitext) dataset.
LoRA adapters were applied to efficiently fine-tune the model for **causal language modeling** while keeping the majority of parameters frozen.
---
## Model Description
- **Base Model:** EleutherAI/pythia-1b
- **Fine-tuning Method:** Low-Rank Adaptation (LoRA)
- **Task:** Causal Language Modeling
- **Dataset:** WikiText-103 (raw v1)
LoRA introduces trainable rank-decomposition matrices into the attention and dense layers of the model, allowing efficient fine-tuning with reduced compute and memory costs compared to full parameter updates.
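The rank-decomposition update can be sketched numerically. This is an illustrative toy in NumPy, not the PEFT implementation: the frozen weight `W` is augmented with a low-rank path `(alpha/r) * B @ A`, where `B` is zero-initialized so training starts exactly from the base model. The dimensions below are made up; `r=8` and `alpha=64` match the config above.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 16, 16, 8, 64

W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus scaled low-rank path: W x + (alpha/r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B = 0 the adapter contributes nothing, so the initial model
# is identical to the frozen base model.
assert np.allclose(lora_forward(x), W @ x)
```

Only `A` and `B` are updated during fine-tuning, which is why the memory and compute cost is a small fraction of a full-parameter update.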
---
## Training Setup
- **Framework:** Transformers + PEFT + PyTorch
- **Adapter Method:** LoRA
- **Target Modules:** `query_key_value`, `dense`
- **LoRA Config:** `r=8`, `alpha=64`, `dropout=0.1`
- **Batch size:** 8 (gradient accumulation: 2)
- **Sequence length (block size):** 1024
- **Optimizer:** AdamW
- **Learning rate:** 2e-5 with cosine decay
- **Epochs:** 10
- **Precision:** FP16
- **Callbacks:** Early stopping, custom metric logging
---
## Results
### Final Training Metrics
- **Training Loss:** 2.4572
- **Final Step Loss:** 2.4434
- **Gradient Norm:** 0.4280
- **Learning Rate at End:** 1.56e-07
### Evaluation Metrics (Epoch 10)
- **Evaluation Loss:** 2.4363
- **Evaluation Perplexity:** 11.43
- **Evaluation Runtime:** 2.2418s
- **Samples per Second:** 107.50
- **Steps per Second:** 3.57
---
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from peft import PeftModel
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("./pythia-wikitext-lora")
# Load base model and LoRA adapters
base_model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1b")
model = PeftModel.from_pretrained(base_model, "./pythia-wikitext-lora")
model.eval()
# Example
input_text = "The history of natural language processing"
inputs = tokenizer(input_text, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(**inputs, max_length=50)
print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
addopptu/blockassist-bc-skilled_arctic_lion_1756588351
|
addopptu
| 2025-08-30T21:12:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skilled arctic lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:12:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skilled arctic lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Yntec/RPG_Remix
|
Yntec
| 2025-08-30T21:12:03Z | 640 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"Base Model",
"Fantasy",
"New World",
"stable-diffusion",
"stable-diffusion-1.5",
"stable-diffusion-diffusers",
"text-to-image",
"Anashel",
"base_model:Yntec/RPG",
"base_model:finetune:Yntec/RPG",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-25T20:22:32Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Base Model
- Fantasy
- New World
- stable-diffusion
- stable-diffusion-1.5
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Anashel
base_model:
- Yntec/RPG
---
This model now ships with the no-EMA weights, the 840K VAE baked in, and a fixed diffusers version.

# RPG Remix
A mix between RPG v5 by Anashel and RPG v3 Candidate 16 by Anashel. RPG v4 did a sad turn towards realism, v3 was the most fantastic one but in future versions the magic was gone, focusing on realism of fabric and such. This remix intends to bring the magic back so you get the best of both worlds! Showcase and prompts (all use seed 9119):
Cover: (girl, mermaid:1.4) , bouffant hair, sirena, teal fish tail, bra (by Michelangelo Casagrande), Greg Rutkowski, Sally Mann, concept art, 4k), (analog:1.2), (high sharpness), (detailed pupils:1.1), (painting:1.1), (digital painting:1.1), Masterpiece, best quality, (highly detailed photo:1.1), 8k, photorealistic, By jeremy mann, by sandra chevrier, by maciej kuciara, sharp, (perfect body:1.1), realistic, real shadow, 3d

prompt = "close-up photo of the most beautiful artwork in the world girl pirate, ((epic heroic fantasy girl with long hair with braids heroine good look in dynamic pose, fantastic location, majestic cluttered environment)), full body 8k unity render, action shot, very dark lighting, heavy shadows, detailed, detailed face, (vibrant, photo realistic, realistic, dramatic, dark, sharp focus, 8k), (weathered damaged old worn leather outfit:1.4), (intricate:1.4), decadent, (highly detailed:1.4), digital painting, octane render, artstation, concept art, smooth, sharp focus, illustration, art by artgerm, (loish:0.23), wlop ilya kuvshinov, and greg rutkowski and alphonse mucha gracias, (global illumination, studio light, volumetric light), heavy rain, particles floating"

prompt = "(gloved) mage, (young woman), Leather and stitched, heavy red armor, (short black hair:1.3), body tattoo, blue eyes, closeup, Action Shot, action pose, dynamic pose (battlefield background:1.4), full body, walking pose, slow motion, (insanely detailed, bloom:1.5), (highest quality, Alessandro Casagrande, Greg Rutkowski, Sally Mann, concept art, 4k), (analog:1.2), (high sharpness), (detailed pupils:1.1), (painting:1.1), (digital painting:1.1), detailed face and eyes, Masterpiece, best quality, (highly detailed photo:1.1), 8k, photorealistic, (long dark blonde Hair, ponytail haircut, ecstatic:1.1), By jeremy mann, by sandra chevrier, by maciej kuciara, sharp, (perfect body:1.1), realistic, real shadow, 3d, (by Michelangelo)"

prompt = "man with a painting of a farmer girl hugging husband wearing overalls, hand on shoulder, ginger hair, straw hat evocative pose , Feminine , Detailed Pupils , at , Intricate , High Detail , sharp, art by onche_ondulay"
Original page:
https://civitai.com/models/1116
|
bah63843/blockassist-bc-plump_fast_antelope_1756588203
|
bah63843
| 2025-08-30T21:10:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:10:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kenob1n/2d
|
kenob1n
| 2025-08-30T21:10:34Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-10-23T08:10:36Z |
---
license: other
license_name: 2d
license_link: LICENSE
---
|
sekirr/blockassist-bc-masked_tenacious_whale_1756588144
|
sekirr
| 2025-08-30T21:09:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:09:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756588049
|
canoplos112
| 2025-08-30T21:09:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:08:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mia-project-2025/pythia-1B-feature-extraction-wikitext-103
|
mia-project-2025
| 2025-08-30T21:09:24Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-30T20:27:44Z |
---
license: apache-2.0
---
# Pythia-1B Feature-Based Transfer Learning on WikiText-103
This repository contains a feature-based transfer learning experiment using the [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) model on the [WikiText-103](https://huggingface.co/datasets/Salesforce/wikitext) dataset.
The base model was **frozen**, and a lightweight trainable classification head was added for causal language modeling.
---
## Model Description
- **Base Model:** EleutherAI/pythia-1b
- **Training Paradigm:** Feature-based transfer learning (frozen base + new lightweight head)
- **Task:** Causal Language Modeling
- **Dataset:** WikiText-103 (raw v1)
The base model (`gpt_neox`) was frozen to retain pretrained knowledge. A new head (2-layer feedforward with ReLU and dropout) was trained on top of the hidden states for efficient adaptation.
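A minimal sketch of this frozen-features pattern (the actual `FrozenPythiaWithNewHead` class lives in this repo's `model.py`; the sizes and exact head layout below are assumptions based on the description, with dropout omitted at inference):

```python
import numpy as np

# Illustrative sketch, not the card's FrozenPythiaWithNewHead class: the frozen
# base model's hidden states are treated as fixed features, and only a small
# 2-layer feedforward head with ReLU maps them to vocabulary logits.
rng = np.random.default_rng(0)
hidden_size, inner, vocab = 2048, 512, 1000  # 2048 is Pythia-1B's hidden size;
                                             # inner/vocab are toy values here

W1 = rng.standard_normal((hidden_size, inner)) * 0.01
W2 = rng.standard_normal((inner, vocab)) * 0.01

def head(hidden_states: np.ndarray) -> np.ndarray:
    """Map frozen hidden states (seq, hidden) to logits (seq, vocab)."""
    h = np.maximum(hidden_states @ W1, 0.0)  # linear + ReLU; dropout omitted at eval
    return h @ W2

hidden = rng.standard_normal((4, hidden_size))  # stand-in for frozen base outputs
logits = head(hidden)
print(logits.shape)
```

Only `W1` and `W2` receive gradients during training; the base model's parameters stay frozen.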
---
## Training Setup
- **Framework:** Transformers + PyTorch
- **GPU:** Multi-GPU (CUDA enabled)
- **Batch size:** 8 (gradient accumulation: 2)
- **Sequence length (block size):** 1024
- **Optimizer:** AdamW
- **Learning rate:** 2e-4 with cosine decay
- **Epochs:** 10
- **Mixed Precision:** FP16
- **Callbacks:** Early stopping, custom metric logging
---
## Results
### Final Training Metrics
- **Training Loss:** 2.6275
- **Final Step Loss:** 2.4289
- **Gradient Norm:** 0.3317
- **Learning Rate at End:** 1.55e-06
### Evaluation Metrics (Epoch 10)
- **Evaluation Loss:** 2.5432
- **Evaluation Perplexity:** 12.72
- **Evaluation Runtime:** 1.6039s
- **Samples per Second:** 150.26
- **Steps per Second:** 4.99
---
## Usage
```python
from transformers import AutoTokenizer
import torch
from model import FrozenPythiaWithNewHead
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("./pythia-wikitext-feature")
# Load model
model = FrozenPythiaWithNewHead.from_pretrained("./pythia-wikitext-feature")
model.eval()
# Example
input_text = "The history of natural language processing"
inputs = tokenizer(input_text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs["logits"]
next_token_id = torch.argmax(logits[:, -1, :], dim=-1)
print("Next token:", tokenizer.decode(next_token_id))
```
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1756586327
|
sampingkaca72
| 2025-08-30T21:08:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:08:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kingabzpro/gpt-oss-20b-dermatology-qa
|
kingabzpro
| 2025-08-30T21:07:32Z | 0 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dermatology",
"medical",
"text-generation",
"conversational",
"en",
"dataset:kingabzpro/dermatology-qa-firecrawl-dataset",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T11:33:01Z |
---
base_model: openai/gpt-oss-20b
datasets:
- kingabzpro/dermatology-qa-firecrawl-dataset
library_name: transformers
model_name: gpt-oss-20b-dermatology-qa
tags:
- generated_from_trainer
- trl
- sft
- dermatology
- medical
licence: license
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Model Card for gpt-oss-20b-dermatology-qa
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [kingabzpro/dermatology-qa-firecrawl-dataset](https://huggingface.co/datasets/kingabzpro/dermatology-qa-firecrawl-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "How does the source suggest clinicians approach the diagnosis of rosacea?"
# Load pipeline
generator = pipeline(
"text-generation",
model="kingabzpro/gpt-oss-20b-dermatology-qa",
device="cuda" # or device=0
)
# Run inference (passing in chat-style format)
output = generator(
[{"role": "user", "content": question}],
max_new_tokens=200,
return_full_text=False
)[0]
print(output["generated_text"])
# The source says that clinicians should use a combination of clinical signs and symptoms when diagnosing rosacea, rather than relying on a single feature.
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756587954
|
bah63843
| 2025-08-30T21:06:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:06:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756587805
|
ggozzy
| 2025-08-30T21:04:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:04:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
addopptu/blockassist-bc-iridescent_aquatic_parrot_1756587840
|
addopptu
| 2025-08-30T21:04:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent aquatic parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:04:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent aquatic parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756587723
|
canoplos112
| 2025-08-30T21:03:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:02:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756587729
|
sekirr
| 2025-08-30T21:02:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:02:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756587550
|
ggozzy
| 2025-08-30T21:00:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:00:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jecyr/blockassist-bc-diving_huge_rat_1756587394
|
jecyr
| 2025-08-30T20:58:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving huge rat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:57:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving huge rat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/WebSailor-32B-GGUF
|
mradermacher
| 2025-08-30T20:56:24Z | 45 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Alibaba-NLP/WebSailor-32B",
"base_model:quantized:Alibaba-NLP/WebSailor-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-29T20:59:00Z |
---
base_model: Alibaba-NLP/WebSailor-32B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Alibaba-NLP/WebSailor-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#WebSailor-32B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WebSailor-32B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WebSailor-32B-GGUF/resolve/main/WebSailor-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756587235
|
canoplos112
| 2025-08-30T20:55:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:54:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756584774
|
acidjp
| 2025-08-30T20:55:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:55:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RomqaJeee/blockassist-bc-thorny_nasty_rhino_1756587219
|
RomqaJeee
| 2025-08-30T20:55:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny nasty rhino",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:54:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny nasty rhino
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sonic-man/blockassist-bc-poisonous_graceful_cow_1756584834
|
Sonic-man
| 2025-08-30T20:54:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"poisonous graceful cow",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:54:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- poisonous graceful cow
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
elliepreed/russian_english_sequential
|
elliepreed
| 2025-08-30T20:54:00Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"gpt2",
"region:us"
] | null | 2025-08-29T17:39:34Z |
# elliepreed/russian_english_sequential
Russian–English GPT-2 (sequential curriculum) checkpoints and tokenizers.
- Checkpoints are uploaded to the repo root.
- Tokenizers available under `tokenizers/`.
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756587167
|
Vasya777
| 2025-08-30T20:53:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:53:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756587041
|
ggozzy
| 2025-08-30T20:51:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:51:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756587012
|
sekirr
| 2025-08-30T20:50:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:50:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rylyshkvar/charllama-2.6B-turboshitpost
|
rylyshkvar
| 2025-08-30T20:50:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ru",
"en",
"arxiv:2302.13971",
"base_model:ai-forever/charllama-2.6B",
"base_model:finetune:ai-forever/charllama-2.6B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T19:12:45Z |
---
license: mit
library_name: transformers
pipeline_tag: text-generation
base_model: ai-forever/charllama-2.6B
language:
- ru
- en
---
# Charllama-2.6B fine-tuned on a small corpus of MM44 rap
**The model is a work in progress; quality is so-so**

This repository contains a language model built on the [Llama](https://arxiv.org/abs/2302.13971) architecture, based on [ai-forever/charllama-2.6B](https://huggingface.co/ai-forever/charllama-2.6B).
## Training Data
This model was fine-tuned on a small dataset of lyric snippets by the rapper mm44 turboshitpost machine, collected from [genius](https://genius.com/). Each snippet went through automated processing: unsuitable parts were removed and stress marks were added. Because the dataset turned out to be very small, the result is mediocre; improvements are planned for the future.
## Character-Level Tokenization
The model ships with a character-level tokenizer identical to the one in the [Koziev/character-tokenizer](https://github.com/Koziev/character-tokenizer) repository, but converted to a format usable via `transformers.AutoTokenizer`.
## Usage
A simple example of using this model with the `transformers` library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
generation_args = {'max_length': 128,
'do_sample': True,
'temperature': 0.7,
'top_p': 0.92,
'top_k': 50,
'repetition_penalty': 1.2
}
device = "cuda" if torch.cuda.is_available() else "cpu"
model_dir = "rylyshkvar/charllama-2.6B-turboshitpost"
model = AutoModelForCausalLM.from_pretrained(model_dir).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
prompt = "mm44:\n" + chr(8) + "Индустри́я подгоре́ла"
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
out = model.generate(input_ids=input_ids.to(device),
eos_token_id=tokenizer.eos_token_id,
**generation_args)
print(tokenizer.decode(out[0]))
```
|
VirtualKimi/Jinx-Qwen3-30B-A3B-Thinking-2507-Q4_K_M-GGUF
|
VirtualKimi
| 2025-08-30T20:49:10Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"vllm",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Jinx-org/Jinx-Qwen3-30B-A3B-Thinking-2507",
"base_model:quantized:Jinx-org/Jinx-Qwen3-30B-A3B-Thinking-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-30T20:47:50Z |
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model: Jinx-org/Jinx-Qwen3-30B-A3B-Thinking-2507
tags:
- vllm
- llama-cpp
- gguf-my-repo
extra_gated_heading: You need to read and agree to the Disclaimer and User Agreement
to access this model.
extra_gated_description: '
## Disclaimer and User Agreement
1. Introduction
Thank you for your interest in accessing this model (“the Model”).
Before you access, download, or use the Model or any derivative works, please read
and understand this Disclaimer and User Agreement (“Agreement”).
By checking “I have read and agree” and accessing the Model, you acknowledge that
you have read, understood, and agreed to all terms of this Agreement.
If you do not agree with any part of this Agreement, do not request or use the Model.
2. Nature of the Model & Risk Notice
The Model is trained using large-scale machine learning techniques and may generate
inaccurate, false, offensive, violent, sexual, discriminatory, politically sensitive,
or otherwise uncontrolled content.
The Model does not guarantee the accuracy, completeness, or legality of any generated
content. You must independently evaluate and verify the outputs, and you assume
all risks arising from their use.
The Model may reflect biases or errors present in its training data, potentially
producing inappropriate or controversial outputs.
3. License and Permitted Use
You may use the Model solely for lawful, compliant, and non-malicious purposes in
research, learning, experimentation, and development, in accordance with applicable
laws and regulations.
You must not use the Model for activities including, but not limited to:
Creating, distributing, or promoting unlawful, violent, pornographic, terrorist,
discriminatory, defamatory, or privacy-invasive content;
Any activity that could cause significant negative impact on individuals, groups,
organizations, or society;
High-risk applications such as automated decision-making, medical diagnosis, financial
transactions, or legal advice without proper validation and human oversight.
You must not remove, alter, or circumvent any safety mechanisms implemented in the
Model.
4. Data and Privacy
You are solely responsible for any data processed or generated when using the Model,
including compliance with data protection and privacy regulations.
The Model’s authors and contributors make no guarantees or warranties regarding
data security or privacy.
5. Limitation of Liability
To the maximum extent permitted by applicable law, the authors, contributors, and
their affiliated institutions shall not be liable for any direct, indirect, incidental,
or consequential damages arising from the use of the Model.
You agree to bear full legal responsibility for any disputes, claims, or litigation
arising from your use of the Model, and you release the authors and contributors
from any related liability.
6. Updates and Termination
This Agreement may be updated at any time, with updates posted on the Model’s page
and effective immediately upon publication.
If you violate this Agreement, the authors reserve the right to revoke your access
to the Model at any time.
I have read and fully understand this Disclaimer and User Agreement, and I accept
full responsibility for any consequences arising from my use of the Model.'
extra_gated_button_content: I've read and agree
---
# VirtualKimi/Jinx-Qwen3-30B-A3B-Thinking-2507-Q4_K_M-GGUF
This model was converted to GGUF format from [`Jinx-org/Jinx-Qwen3-30B-A3B-Thinking-2507`](https://huggingface.co/Jinx-org/Jinx-Qwen3-30B-A3B-Thinking-2507) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Jinx-org/Jinx-Qwen3-30B-A3B-Thinking-2507) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo VirtualKimi/Jinx-Qwen3-30B-A3B-Thinking-2507-Q4_K_M-GGUF --hf-file jinx-qwen3-30b-a3b-thinking-2507-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo VirtualKimi/Jinx-Qwen3-30B-A3B-Thinking-2507-Q4_K_M-GGUF --hf-file jinx-qwen3-30b-a3b-thinking-2507-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo VirtualKimi/Jinx-Qwen3-30B-A3B-Thinking-2507-Q4_K_M-GGUF --hf-file jinx-qwen3-30b-a3b-thinking-2507-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo VirtualKimi/Jinx-Qwen3-30B-A3B-Thinking-2507-Q4_K_M-GGUF --hf-file jinx-qwen3-30b-a3b-thinking-2507-q4_k_m.gguf -c 2048
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756586829
|
bah63843
| 2025-08-30T20:47:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:47:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756586788
|
ggozzy
| 2025-08-30T20:47:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:47:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756586586
|
canoplos112
| 2025-08-30T20:44:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:43:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/LURE_5.1-i1-GGUF
|
mradermacher
| 2025-08-30T20:44:49Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:lurepaper/LURE_5.1",
"base_model:quantized:lurepaper/LURE_5.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-30T15:25:05Z |
---
base_model: lurepaper/LURE_5.1
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/lurepaper/LURE_5.1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LURE_5.1-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/LURE_5.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
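The concatenation step mentioned above amounts to joining the parts in order with `cat`. A minimal sketch using stand-in files (`demo.gguf.part1of2` etc. are placeholders for illustration, not files from this repo; real multi-part quants follow the pattern `<file>.gguf.part1ofN` … `<file>.gguf.partNofN` and are multi-gigabyte shards):

```shell
# Create two stand-in "parts" so the sketch is self-contained.
printf 'first-half'  > demo.gguf.part1of2
printf 'second-half' > demo.gguf.part2of2

# Reassemble the GGUF by concatenating the parts in ascending order.
cat demo.gguf.part1of2 demo.gguf.part2of2 > demo.gguf
```

The resulting single `demo.gguf` is what a GGUF loader (e.g. llama.cpp) expects; the individual part files can be deleted afterwards.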
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ1_M.gguf) | i1-IQ1_M | 8.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ2_S.gguf) | i1-IQ2_S | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ2_M.gguf) | i1-IQ2_M | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/LURE_5.1-i1-GGUF/resolve/main/LURE_5.1.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
sekirr/blockassist-bc-masked_tenacious_whale_1756586537
|
sekirr
| 2025-08-30T20:42:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:42:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|