modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
SicariusSicariiStuff/X-Ray_Alpha | SicariusSicariiStuff | 2025-08-24T15:30:21Z | 200 | 94 | null |
[
"safetensors",
"gemma3",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"license:gemma",
"region:us"
] | null | 2025-03-22T15:08:42Z |
---
license: gemma
language:
- en
base_model:
- google/gemma-3-4b-it
datasets:
- SicariusSicariiStuff/UBW_Tapestries
---
<div align="center">
<b style="font-size: 40px;">X-Ray_Alpha</b>
</div>
<img src="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha/resolve/main/Images/X-Ray_Alpha.png" alt="X-Ray_Alpha" style="width: 30%; min-width: 450px; display: block; margin: auto;">
---
<style>
.hf-links, .hf-tldr{
display:flex;justify-content:center;align-items:center;flex-wrap:wrap;
gap:14px;margin:16px 0;
}
.hf-links a, .hf-tldr a{
display:flex;flex-direction:column;align-items:center;justify-content:center;
text-align:center;text-decoration:none;font-weight:700;line-height:1.15;
padding:10px 16px;border-radius:14px;border:2px solid currentColor;
transition:transform .15s ease,box-shadow .15s ease,background-color .15s ease,color .15s ease;
}
.hf-tldr a{
font-size:48px;color:purple;min-width:100%;
}
.hf-tldr a:hover{
transform:translateY(-2px);
background:rgba(128,0,128,.1);
box-shadow:0 8px 22px rgba(128,0,128,.45);
color:#fff;
}
.hf-links a{
font-size:20px;min-width:240px;max-width:280px;
}
.hf-links a .top{font-size:16px;opacity:.9;}
.hf-links a .bottom{font-size:20px;}
.hf-links a.green{color:#64FF00;}
.hf-links a:hover{
transform:translateY(-1px);
background:rgba(255,255,255,0.04);
box-shadow:0 6px 18px rgba(0,0,0,.15), inset 0 0 0 9999px rgba(255,255,255,.02);
}
.hf-links a.green:hover{
background:rgba(100,255,0,.14);
box-shadow:0 8px 20px rgba(100,255,0,.35);
color:#093;
}
/* mobile stacking */
@media (max-width:520px){
.hf-links a{min-width:100%;max-width:100%;}
.hf-tldr a{font-size:36px;}
}
</style>
<div class="hf-tldr">
<a href="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha#tldr">
Click here for TL;DR
</a>
</div>
---
<div class="hf-links">
<a class="green" href="https://ko-fi.com/sicarius">
<span class="top">Click here</span>
<span class="bottom">to buy me a coffee</span>
</a>
</div>
---
This is a pre-alpha proof of concept of **a real, fully uncensored vision model** based on Gemma-3 4B Instruct.
Why do I say **"real"**? The few vision models we have (Qwen, Llama 3.2) are "censored," and their fine-tunes touch only the **text portion** of the model, because training a vision model is a serious pain.
The only actually trained, uncensored vision model I am aware of is [ToriiGate](https://huggingface.co/Minthy/ToriiGate-v0.4-7B); the rest of the vision models are just the stock vision encoder plus a fine-tuned LLM.
# Does this even work?
<h2 style="color: green; font-weight: bold; font-size: 80px; text-align: center;">YES!</h2>
---
# Why is this Important?
Having a **fully compliant** vision model is a critical step toward democratizing vision capabilities for various tasks, especially **image tagging**. Tagging is essential both for making LoRAs for image diffusion models and for mass-tagging images to pretrain a diffusion model.
In other words, a fully compliant and accurate vision model will allow the open-source community to easily train LoRAs and even pretrain image diffusion models.
Another important task is content moderation and classification. Many use cases are not black and white: some content that corporations might consider NSFW is allowed, while other content is not; there is nuance. Today's vision models **do not let the users decide**, as they will flatly **refuse** to run inference on any content that Google or some other corporation has decided is not to their liking, which makes these stock models useless in many cases.
What if someone wants to classify art that includes nudity? A naked statue over 1,000 years old displayed in the middle of a city, in a museum, or at the city square is perfectly acceptable; however, a stock vision model will flatly refuse to process it.
It's like the many "sensitive" topics that LLMs will flatly **refuse to answer**, even though the content is **publicly available on Wikipedia**. This is an attitude of **cynical paternalism**: cynical because corporations **take private data to train their models** and call it "perfectly fine," yet they appoint themselves the **arbiters of morality** and indirectly preach to us from a position of presumed moral superiority. This **gatekeeping badly hurts innovation**, vision models **especially so**, as the task of **tagging cannot be done by a single person at scale**, but a corporation can do it.
# How can YOU help?
This is a sort of **"pre-alpha"** proof of concept; I took **a lot** of shortcuts and did plenty of "hacking" to make this work, and I would greatly appreciate some help to turn it into an accurate and powerful open tool. I am not asking for money, but for well-tagged data. I will take the burden and cost of the compute on myself, but I **cannot do tagging** at a large scale by myself.
## Bottom line: I need a lot of well-tagged, diverse data
So:
- If you have well-tagged images
- If you have a link to a well-tagged image dataset
- If you can, and are willing to, do image tagging
Then please send an email with [DATASET] in the title to:
```
[email protected]
```
As you probably figured from the email address, this is not my main email, and I expect it to be spammed with junk, so **please use the [DATASET] tag** so I can more easily find the emails of **the good people** who are actually trying to help.
## Please see this dataset repo if you want to help:
[X-Ray_Community_Tagging](https://huggingface.co/datasets/SicariusSicariiStuff/X-Ray_Community_Tagging)
Also, if you don't want to upload to the repo (although it's encouraged, and you can protect it with a password for privacy), you can still help by linking a Google Drive or attaching the images with the corrected output via the email above.
Let's make this happen. We can do it!
---
### TL;DR
- **Fully uncensored and trained:** there's no moderation in the vision model; I actually trained it.
- **The 2nd uncensored vision model in the world:** as far as I know, ToriiGate was the first, and this one is the second.
- **In-depth descriptions:** very detailed, long descriptions.
- The text portion is **somewhat uncensored** as well; I didn't want to butcher and fry it too much, so it remains "smart".
- **NOT perfect:** this is a POC showing that the task can be done at all; a lot more work is needed.
- **Good roleplay & writing:** I used a massive corpus of high-quality human (**~60%**) and synthetic data.
---
# How to run it:
## VRAM needed for FP16: 15.9 GB
[Run inference with this](https://github.com/SicariusSicariiStuff/X-Ray_Vision)
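As a rough sanity check on that figure: FP16 stores two bytes per parameter, so the weights alone of a model in the ~4.3 B-parameter range (an assumed count for Gemma-3 4B including its vision tower, not stated in this card) occupy around 8 GiB; the rest of the 15.9 GB goes to activations, the KV cache, and other inference overhead.

```python
def fp16_weights_gib(n_params_billion: float) -> float:
    """Lower bound on VRAM for the weights alone: FP16 = 2 bytes per parameter."""
    return n_params_billion * 1e9 * 2 / 2**30

# ~4.3B params (assumed) -> roughly 8 GiB of weights; inference overhead
# accounts for the remainder of the quoted 15.9 GB.
print(round(fp16_weights_gib(4.3), 1))
```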
# This is a pre-alpha POC (Proof Of Concept)
## Instructions:
Clone the repo:
```
git clone https://github.com/SicariusSicariiStuff/X-Ray_Vision.git
cd X-Ray_Vision/
```
Set up a venv (tested with Python 3.11; probably works with 3.10):
```
python3.11 -m venv env
source env/bin/activate
```
Install dependencies
```
pip install git+https://github.com/huggingface/[email protected]
pip install torch
pip install pillow
pip install accelerate
```
# Running inference
Usage:
```
python xRay-Vision.py /path/to/model/ /dir/with/images/
```
The output will print to the console, and the results will be exported into a directory named after your image directory with the suffix "_TXT".
So if you run:
```
python xRay-Vision.py /some_path/x-Ray_model/ /home/images/weird_cats/
```
The results will be exported to:
```
/home/images/weird_cats_TXT/
```
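The output-directory convention described above can be sketched in a few lines (a minimal sketch of the naming rule only; the actual implementation in `xRay-Vision.py` may differ):

```python
from pathlib import Path

def output_dir_for(image_dir: str) -> str:
    """Mirror the convention above: results go to the image directory
    plus a "_TXT" suffix."""
    p = Path(image_dir.rstrip("/"))
    return str(p.with_name(p.name + "_TXT"))

print(output_dir_for("/home/images/weird_cats/"))  # /home/images/weird_cats_TXT
```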
---
<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>
---
## Citation Information
```
@misc{X-Ray_Alpha,
author = {SicariusSicariiStuff},
title = {X-Ray_Alpha},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha}
}
```
---
## Other stuff
- [X-Ray_Vision](https://github.com/SicariusSicariiStuff/X-Ray_Vision): easy stand-alone bulk vision inference at scale (run inference on a folder of images).
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector): nuke GPTisms with the SLOP detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned): the grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates): some updates, some rambles; sort of a mix between a diary and a blog.
|
shigedon/gen15-chem-expert-1 | shigedon | 2025-08-24T15:25:31Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T13:46:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
---
license: unknown
language:
- en
base_model:
- Qwen/Qwen2.5-32B
tags:
- chemistry
pipeline_tag: question-answering
---
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Datasets
[bindingdb](https://www.bindingdb.org/rwd/bind/index.jsp) \
[mmlu](https://huggingface.co/datasets/cais/mmlu) \
[QM9-Dataset](https://huggingface.co/datasets/HR-machine/QM9-Dataset) \
[remapped_USPTO_50K](https://figshare.com/articles/dataset/USPTO_reaction_datasets_remapped_by_LocalMapper/25046471?file=44192528)
|
MKNE/tuto | MKNE | 2025-08-24T15:21:54Z | 0 | 0 | diffusers |
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-24T14:47:36Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Tanda
---
# Tuto
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Tanda` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Tanda",
"lora_weights": "https://huggingface.co/MKNE/tuto/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('MKNE/tuto', weight_name='lora.safetensors')
image = pipeline('Tanda').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/MKNE/tuto/discussions) to add images that show off what you’ve made with this LoRA.
|
kavpro/blockassist-bc-tall_lively_caribou_1756047867 | kavpro | 2025-08-24T15:05:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall lively caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T15:05:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lautan/blockassist-bc-gentle_patterned_goat_1756046192 | lautan | 2025-08-24T15:05:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T15:05:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Laterr/rut5-base-absum-finetuned-summ | Laterr | 2025-08-24T15:03:15Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:cointegrated/rut5-base-absum",
"base_model:finetune:cointegrated/rut5-base-absum",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2025-08-22T20:13:59Z |
---
library_name: transformers
license: mit
base_model: cointegrated/rut5-base-absum
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: rut5-base-absum-finetuned-summ
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rut5-base-absum-finetuned-summ
This model is a fine-tuned version of [cointegrated/rut5-base-absum](https://huggingface.co/cointegrated/rut5-base-absum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6039
- Rouge1: 97.0122
- Rouge2: 94.5148
- Rougel: 97.0189
- Rougelsum: 96.9668
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
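For reference, these hyperparameters roughly correspond to the following `Seq2SeqTrainingArguments` sketch. This is an assumed reconstruction: the `output_dir` and `predict_with_generate` values are illustrative guesses, and the dataset and `Trainer` wiring are not given in this card.

```python
from transformers import Seq2SeqTrainingArguments

# Hedged sketch reconstructing the listed hyperparameters; not the
# exact training script, which is not included in this card.
args = Seq2SeqTrainingArguments(
    output_dir="rut5-base-absum-finetuned-summ",  # assumed
    learning_rate=5.6e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=8,
    predict_with_generate=True,  # assumed: typical for ROUGE evaluation
)
```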
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 15 | 0.9642 | 88.9988 | 70.6048 | 88.8684 | 88.9803 |
| No log | 2.0 | 30 | 0.7765 | 94.6938 | 86.9198 | 94.7219 | 94.6778 |
| No log | 3.0 | 45 | 0.6995 | 96.1002 | 90.9986 | 96.058 | 96.058 |
| No log | 4.0 | 60 | 0.6596 | 96.0421 | 92.2644 | 96.067 | 96.0107 |
| No log | 5.0 | 75 | 0.6294 | 96.5868 | 93.2489 | 96.5836 | 96.5625 |
| No log | 6.0 | 90 | 0.6172 | 96.4605 | 92.827 | 96.4538 | 96.4071 |
| No log | 7.0 | 105 | 0.6091 | 97.0122 | 94.5148 | 97.0189 | 96.9668 |
| 1.0079 | 8.0 | 120 | 0.6039 | 97.0122 | 94.5148 | 97.0189 | 96.9668 |
### Framework versions
- Transformers 4.53.0
- Pytorch 2.2.1+cu118
- Datasets 4.0.0
- Tokenizers 0.21.4
|
liubanlo/blockassist-bc-whiskered_running_alligator_1756047523 | liubanlo | 2025-08-24T14:59:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whiskered running alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:59:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whiskered running alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1756047245 | kayacrypto | 2025-08-24T14:56:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:55:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756046998 | ggozzy | 2025-08-24T14:51:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:51:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
g-assismoraes/Qwen3-4B-Base-fpi-alpha0.8-var-agnews | g-assismoraes | 2025-08-24T14:45:37Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T14:42:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756046653 | 2hpsatt | 2025-08-24T14:45:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:45:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
raniero/test-dpo-3 | raniero | 2025-08-24T14:31:56Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-24T14:31:47Z |
# DPO Submission
- **task_id**: test-dpo-3
- **base_model**: mistralai/Mistral-7B-Instruct-v0.2
- **SHA256**: 5c12417e0e51165ea2491e6b6c7f6f26f9930df72bd2f208a70c509e8d1d24e4
- **Tags**: LoRA, DPO
|
pidbu/blockassist-bc-whistling_alert_shrew_1756045219 | pidbu | 2025-08-24T14:25:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:21:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756045305 | Ferdi3425 | 2025-08-24T14:22:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:22:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanaphatt1/typhoon2.1-gemma3-4b-strategy-prediction-v1 | thanaphatt1 | 2025-08-24T14:21:38Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:scb10x/typhoon2.1-gemma3-4b",
"base_model:finetune:scb10x/typhoon2.1-gemma3-4b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-24T12:27:02Z |
---
base_model: scb10x/typhoon2.1-gemma3-4b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thanaphatt1
- **License:** apache-2.0
- **Finetuned from model:** scb10x/typhoon2.1-gemma3-4b
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1756043044 | manusiaperahu2012 | 2025-08-24T14:10:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:10:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756044555 | Ferdi3425 | 2025-08-24T14:09:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:09:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1756042971 | mang3dd | 2025-08-24T14:09:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:09:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1756042984 | koloni | 2025-08-24T14:09:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:09:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
raniero/test-start-vali-4
|
raniero
| 2025-08-24T14:07:42Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-24T14:07:38Z |
# Submission test-start-vali-4
- Base model: mistralai/Mistral-7B-Instruct-v0.2
- Repo: raniero/test-start-vali-4
- Task: test-start-vali-4
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756042467
|
kojeklollipop
| 2025-08-24T14:02:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T14:02:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
artemapash/glasiks
|
artemapash
| 2025-08-24T13:58:18Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-24T13:58:18Z |
---
license: apache-2.0
---
|
anarasgarli/blockassist-bc-fast_howling_cockroach_1756043613
|
anarasgarli
| 2025-08-24T13:54:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fast howling cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:54:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fast howling cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crie123/yolov3s-finetuned-kyrgyz-plates
|
crie123
| 2025-08-24T13:51:25Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2025-08-24T13:09:49Z |
---
license: gpl-3.0
---
# YOLOv3s Fine-Tuned on Kyrgyz License Plates
This repository provides a fine-tuned version of **YOLOv3** trained on a small custom dataset of Kyrgyz vehicle license plates.
The model is intended as a **demonstration of fine-tuning YOLOv3** rather than a production-ready solution.
## Model description
- Base model: [YOLOv3 (Darknet)](https://pjreddie.com/darknet/yolo/)
- Fine-tuned on: [Kyrgyz Car License Plates dataset](https://www.kaggle.com/datasets/pteacher/kyrgyz-car-license-plates) (~478 images, CC0 license)
- Framework: Darknet / PyTorch export
## Intended use
- Educational purposes (transfer learning, YOLO fine-tuning workflow)
- Experimentation with small regional datasets
⚠️ **Note**: The dataset is small (~478 images), so the model may not generalize well outside the training conditions.
For robust license plate detection in production, a larger and more diverse dataset is recommended.
## Training
Below is an example training script used to fine-tune **YOLOv8n** on the Kyrgyz License Plates dataset.
It performs dataset extraction, train/validation split (80/20), YAML generation, and launches training.
```python
import os
import zipfile
import random
import glob
import shutil
from ultralytics import YOLO
# === 1. Extract dataset ===
extract_path = "./datasets/kyrgyz-plates"
zip_path = "./datasets/kyrgyz-car-license-plates.zip"
if os.path.exists(zip_path) and not os.path.exists(extract_path):
with zipfile.ZipFile(zip_path, "r") as z:
z.extractall(extract_path)
# === 2. Split into train/val (80/20) ===
images_src = os.path.join(extract_path, "images")
train_images = os.path.join(extract_path, "train", "images")
train_labels = os.path.join(extract_path, "train", "labels")
val_images = os.path.join(extract_path, "valid", "images")
val_labels = os.path.join(extract_path, "valid", "labels")
for p in (train_images, train_labels, val_images, val_labels):
os.makedirs(p, exist_ok=True)
img_exts = (".jpg", ".jpeg", ".png", ".bmp")
images = [p for p in glob.glob(os.path.join(images_src, "*")) if os.path.splitext(p)[1].lower() in img_exts]
random.seed(42)
random.shuffle(images)
split_idx = int(len(images) * 0.8)
train_list = images[:split_idx]
val_list = images[split_idx:]
def copy_items(lst, dest_img_dir, dest_lbl_dir):
for img_path in lst:
base = os.path.basename(img_path)
shutil.copy2(img_path, os.path.join(dest_img_dir, base))
lbl_src = os.path.splitext(img_path)[0] + ".txt"
if os.path.exists(lbl_src):
shutil.copy2(lbl_src, os.path.join(dest_lbl_dir, os.path.basename(lbl_src)))
copy_items(train_list, train_images, train_labels)
copy_items(val_list, val_images, val_labels)
# === 3. Write data.yaml ===
yaml_path = os.path.join(extract_path, "data.yaml")
with open(yaml_path, "w") as f:
f.write(f"""
path: {extract_path}
train: train/images
val: valid/images
names:
0: plate
""")
# === 4. Train YOLOv8n ===
model = YOLO("yolov8n.pt") # automatically downloads if missing
model.train(
data=yaml_path,
epochs=50,
imgsz=640,
batch=16,
name="yolo-plates-kg"
)
# Locate best weights (sorted so the pick is deterministic across runs)
best_weights = sorted(glob.glob("runs/detect/yolo-plates-kg*/weights/best.pt"))[-1]
print("Best weights:", best_weights)
```
## Training Results
Training metrics and figures (loss curves, mAP, PR/F1 curves) are available in the repository:
- `results.png` – combined training loss and mAP over epochs
You can view or download these images directly from the repository files.
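The label files copied alongside each image in the split above use the standard YOLO text format: one line per box, `class cx cy w h`, with coordinates normalized to the image size. A minimal sketch (assuming the single `plate` class from the `data.yaml` above) of converting such a label back to pixel corners:

```python
def yolo_to_pixels(cx, cy, w, h, img_w, img_h):
    """Convert a normalized YOLO box (center-x, center-y, width, height) to pixel corners."""
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return x1, y1, x2, y2

# A label line "0 0.5 0.5 0.2 0.1" on a 640x480 image:
print(yolo_to_pixels(0.5, 0.5, 0.2, 0.1, 640, 480))  # → (256.0, 216.0, 384.0, 264.0)
```

This is handy for sanity-checking the dataset visually before training.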
|
Hajisub/blockassist-bc-lumbering_vocal_ibis_1756040559
|
Hajisub
| 2025-08-24T13:42:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering vocal ibis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:42:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering vocal ibis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/QiMing-Holos-Plus-Qwen3-14B-qx6-hi-mlx
|
nightmedia
| 2025-08-24T13:30:01Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"qwen",
"unsloth",
"qiming",
"qiming-holos",
"bagua",
"decision-making",
"strategic-analysis",
"cognitive-architecture",
"chat",
"lora",
"philosophy-driven-ai",
"text-generation",
"conversational",
"zh",
"en",
"base_model:aifeifei798/QiMing-Holos-Plus-Qwen3-14B",
"base_model:adapter:aifeifei798/QiMing-Holos-Plus-Qwen3-14B",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-24T12:36:54Z |
---
license: apache-2.0
language:
- zh
- en
tags:
- qwen
- qwen3
- unsloth
- qiming
- qiming-holos
- bagua
- decision-making
- strategic-analysis
- cognitive-architecture
- chat
- lora
- philosophy-driven-ai
- mlx
pipeline_tag: text-generation
library_name: mlx
base_model: aifeifei798/QiMing-Holos-Plus-Qwen3-14B
---
# QiMing-Holos-Plus-Qwen3-14B-qx6-hi-mlx
This model [QiMing-Holos-Plus-Qwen3-14B-qx6-hi-mlx](https://huggingface.co/nightmedia/QiMing-Holos-Plus-Qwen3-14B-qx6-hi-mlx) was
converted to MLX format from [aifeifei798/QiMing-Holos-Plus-Qwen3-14B](https://huggingface.co/aifeifei798/QiMing-Holos-Plus-Qwen3-14B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/QiMing-Holos-Plus-Qwen3-14B-qx6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1756040530
|
katanyasekolah
| 2025-08-24T13:29:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:28:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756040312
|
kojeklollipop
| 2025-08-24T13:26:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:26:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Elizavr/blockassist-bc-reclusive_shaggy_bee_1756041841
|
Elizavr
| 2025-08-24T13:24:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive shaggy bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:24:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive shaggy bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
prithivMLmods/Qwen-Image-Sketch-Smudge
|
prithivMLmods
| 2025-08-24T13:23:55Z | 0 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-23T17:50:59Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/1.png
text: 'Sketch Smudge, A medium-sized sketch of a womans face is depicted on a stark white background. The womans head is facing the left side of the frame, her hair cascading over her shoulders. Her eyes are squinted and her lips are pursed. She is wearing a short-sleeved brown shirt with a collar around her neck. Her hair is a dark brown color, and her bangs are a darker shade of brown. Her eyebrows are a lighter shade of black, and she has a slight smile on her face. There are three blue circles surrounding her head, adding a pop of color to the scene.'
- output:
url: images/2.png
text: 'Sketch Smudge, A black and white portrait of a mans head and shoulders. The mans hair is short and wavy. His eyes are open and he is looking to the right. His hair is a light brown color. He is wearing a black turtleneck with a white collar around his neck. The backdrop is a dark gray color and there is a shadow on the wall behind him.'
- output:
url: images/3.png
text: 'Sketch Smudge, a pencil sketch of a womans face is displayed against a light pink background. The womans head is tilted slightly to the left, her eyes are wide open, and her lips are slightly parted. Her hair is pulled back, cascading over her shoulders, framing her face. She is wearing a sleeveless blouse adorned with a pattern of black and white flowers, adding a pop of color to her outfit. Her earrings are adorned with silver earrings. Behind her, a backdrop of brown leaves can be seen, adding depth to the scene.'
base_model: Qwen/Qwen-Image
instance_prompt: Sketch Smudge
license: apache-2.0
---

# Qwen-Image-Sketch-Smudge
<Gallery />
---
# Model description for Qwen-Image-Sketch-Smudge
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 22 & 2850 |
| Epoch | 20 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 30 [HQ Images]
## Data Sources
| Source | Link |
|--------------|-------------------------------------|
| Playground | [playground.com](https://playground.com/) |
| ArtStation | [artstation.com](https://www.artstation.com/) |
| 4K Wallpapers| [4kwallpapers.com](https://4kwallpapers.com/) |
## Best Dimensions & Inference
| **Dimensions** | **Aspect Ratio** | **Recommendation** |
|-----------------|------------------|---------------------------|
| 1472 x 1140 | 4:3 (approx.) | Best |
| 1024 x 1024 | 1:1 | Default |
### Inference Range
- **Recommended Inference Steps:** 35-50
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline
base_model = "Qwen/Qwen-Image"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "prithivMLmods/Qwen-Image-Sketch-Smudge"
trigger_word = "Sketch Smudge"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
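With the pipeline loaded, generation follows the card's recommendations (35-50 inference steps, 1472 x 1140 as the best dimensions). The helper below is an illustrative sketch, not part of the original card; the `pipe(**settings)` call is the standard diffusers invocation and requires a GPU:

```python
def generation_settings(trigger_word: str, subject: str, steps: int = 40):
    """Compose a prompt and generation kwargs per the card's recommendations."""
    assert 35 <= steps <= 50, "card recommends 35-50 inference steps"
    return {
        "prompt": f"{trigger_word}, {subject}",
        "num_inference_steps": steps,
        "width": 1472,   # best-performing dimensions per the table above
        "height": 1140,
    }

settings = generation_settings("Sketch Smudge", "a pencil sketch of a woman's face")
# image = pipe(**settings).images[0]  # requires the pipeline loaded as above (GPU)
print(settings["prompt"])  # → Sketch Smudge, a pencil sketch of a woman's face
```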
## Trigger words
You should use `Sketch Smudge` to trigger the image generation.
## Download model
[Download](/prithivMLmods/Qwen-Image-Sketch-Smudge/tree/main) the weights from the Files & versions tab.
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1756041691
|
kayacrypto
| 2025-08-24T13:23:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:23:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hirundo-io/hallucinations-reduced-gpt-oss-120b
|
hirundo-io
| 2025-08-24T13:22:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T12:57:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prithivMLmods/Qwen-Image-Fragmented-Portraiture
|
prithivMLmods
| 2025-08-24T13:22:34Z | 0 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-24T12:58:26Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/1.png
text: 'Fragmented Portraiture, a close-up shot of a young Asian girls face is seen through a transparent window. The girls head is tilted slightly to the left, and his eyes are wide open. Her hair is a vibrant shade of black, and he is wearing a white collared shirt with a white collar. Her lips are painted a bright pink, adding a pop of color to the scene. The backdrop is a stark white, creating a stark contrast to the boys body. The window is made up of thin, light-colored wooden blinds, adding depth to the image.'
- output:
url: images/2.png
text: 'Fragmented Portraiture, Captured in a black and white collage, a womans face is featured prominently in the center of the collage. The womans eyes are wide open, and her lips are pursed. Her hair is long and cascades over her shoulders. The background is a stark white, and the womans hair is a vibrant shade of brown, adding a pop of color to the composition.'
- output:
url: images/3.png
text: 'Fragmented Portraiture, Captured in a black and white monochrome, a close-up shot of a womans face is visible through a series of white vertical blinds. The womans eyes are wide open, and her lips are pursed. Her hair is long and cascades down to her shoulders, framing her face. The blinds are pulled up, adding a touch of depth to the scene. The background is a stark white, creating a stark contrast to the womans features.'
base_model: Qwen/Qwen-Image
instance_prompt: Fragmented Portraiture
license: apache-2.0
---

# Qwen-Image-Fragmented-Portraiture
<Gallery />
---
# Model description for Qwen-Image-Fragmented-Portraiture
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 27 & 3050 |
| Epoch | 20 | Save Every N Epochs | 2 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 17 [HQ Images]
## Data Sources
| Source | Link |
|--------------|-------------------------------------|
| Playground | [playground.com](https://playground.com/) |
| ArtStation | [artstation.com](https://www.artstation.com/) |
| 4K Wallpapers| [4kwallpapers.com](https://4kwallpapers.com/) |
## Best Dimensions & Inference
| **Dimensions** | **Aspect Ratio** | **Recommendation** |
|-----------------|------------------|---------------------------|
| 1472 x 1140 | 4:3 (approx.) | Best |
| 1024 x 1024 | 1:1 | Default |
### Inference Range
- **Recommended Inference Steps:** 35-50
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline
base_model = "Qwen/Qwen-Image"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "prithivMLmods/Qwen-Image-Fragmented-Portraiture"
trigger_word = "Fragmented Portraiture"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
## Trigger words
You should use `Fragmented Portraiture` to trigger the image generation.
## Download model
[Download](/prithivMLmods/Qwen-Image-Fragmented-Portraiture/tree/main) the weights from the Files & versions tab.
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756041657
|
Ferdi3425
| 2025-08-24T13:21:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:21:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1756039845
|
elmenbillion
| 2025-08-24T13:18:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:18:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
poult/Mamba-Unet-ECG-classification
|
poult
| 2025-08-24T13:13:00Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2025-08-24T13:11:48Z |
# **MULTI-LABEL ABNORMALITY CLASSIFICATION FROM 12-LEAD ECG USING A 2D RESIDUAL U-NET**
This is the official repo of the paper "**MULTI-LABEL ABNORMALITY CLASSIFICATION FROM 12-LEAD ECG USING A 2D RESIDUAL U-NET**," which was accepted at ICASSP 2024.
**Abstract**: This paper proposes a two-dimensional (2D) deep neural network (DNN) model for electrocardiogram (ECG) abnormality classification, which effectively utilizes the inter- and intra-lead information contained in the 12-lead ECG.
The proposed model is designed using a stack of residual U-shaped (ResU) blocks so that it can effectively capture ECG features at multiple scales.
The 2D features extracted by the ResU blocks are down-mixed to 1D features using a lead combiner block designed to merge lead-domain features into both the time and channel domains.
Through experiments, we confirm that our model outperforms other state-of-the-art models in various metrics.
## Update:
* **2023.12.14** Upload codes
## Requirements
This repo is tested with Ubuntu 22.04, PyTorch 2.0.1, Python 3.10, and CUDA 11.7. Install the package dependencies with:
```
pip install -r requirements.txt
```
## Getting started
1. Install the necessary libraries.
2. Download the PhysioNet Challenge 2021 database and place it in the '../Dataset/' folder.
```
├── 📦 ResUNet_LC
│ └── 📂 dataset
│ └── 📜 train_dataset.csv
│ └── 📜 test_dataset.csv
│ └── ...
└── 📦 Dataset
└── 📂 physionet_challenge_dataset
└── 📂 physionet.org
└── ...
```
If you need the CSV files, please contact us.
3. Run [train_interface.py](https://github.com/seorim0/ResUNet-LC/blob/main/train_interface.py)
* You can simply change any parameter settings if you need to adjust them. ([options.py](https://github.com/seorim0/ResUNet-LC/blob/main/options.py))
## Results




## Reference
**Will Two Do? Varying Dimensions in Electrocardiography: The PhysioNet/Computing in Cardiology Challenge 2021**
Matthew Reyna, Nadi Sadr, Annie Gu, Erick Andres Perez Alday, Chengyu Liu, Salman Seyedi, Amit Shah, and Gari Clifford
[[paper]](https://physionet.org/content/challenge-2021/1.0.3/)
**Automatic diagnosis of the 12-lead ECG using a deep neural network**
Antônio H. Ribeiro, et al.
[[paper]](https://www.nature.com/articles/s41467-020-15432-4) [[code]](https://github.com/antonior92/automatic-ecg-diagnosis)
**A multi-view multi-scale neural network for multi-label ECG classification**
Shunxiang Yang, Cheng Lian, Zhigang Zeng, Bingrong Xu, Junbin Zang, and Zhidong Zhang
[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10021962) [[code]](https://github.com/ysxGitHub/MVMS-net)
**Classification of ECG using ensemble of residual CNNs with attention mechanism**
Petr Nejedly, Adam Ivora, Radovan Smisek, Ivo Viscor, Zuzana Koscova, Pavel Jurak, and Filip Plesinger
[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9662723) [[code]](https://moody-challenge.physionet.org/2021/)
## Contact
Please get in touch with us if you have any questions or suggestions.
E-mail: [email protected] (Seorim Hwang) / [email protected] (Jaebine Cha)
|
pqiqgavyhahah134/blockassist-bc-endangered_grazing_opossum_1756040299
|
pqiqgavyhahah134
| 2025-08-24T13:07:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"endangered grazing opossum",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:07:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- endangered grazing opossum
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nialgalxe/blockassist-bc-sprightly_long_squirrel_1756040613
|
nialgalxe
| 2025-08-24T13:06:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly long squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:06:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly long squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1756039136
|
capungmerah627
| 2025-08-24T13:05:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T13:05:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muralikrishnaraparthi/Mistral-7B-HDFC-Finance-RAFT
|
Muralikrishnaraparthi
| 2025-08-24T12:59:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] |
text-generation
| 2025-08-24T07:01:57Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.1
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.2.dev0
|
anubhav00987/blockassist-bc-skilled_mighty_monkey_1756038382
|
anubhav00987
| 2025-08-24T12:59:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skilled mighty monkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T12:59:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skilled mighty monkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
anarasgarli/blockassist-bc-fast_howling_cockroach_1756039916
|
anarasgarli
| 2025-08-24T12:52:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fast howling cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T12:52:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fast howling cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Hamdi098/Hamditv
|
Hamdi098
| 2025-08-24T12:51:36Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-24T12:51:36Z |
---
license: apache-2.0
---
|
KaiZe623/colab-training
|
KaiZe623
| 2025-08-24T12:48:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-24T11:50:57Z |
---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
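The card does not yet include a snippet. A minimal sketch follows, assuming (from the repo tags) a DistilBERT text-classification checkpoint; the repo id is a placeholder. The `softmax` helper shows how raw classifier logits map to probabilities.

```python
import math

def softmax(logits):
    """Convert raw classifier logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(texts, model_id="<this-repo-id>"):
    """Run the classifier via the transformers pipeline API.

    model_id is a placeholder for this repository's id.
    """
    from transformers import pipeline  # imported lazily; heavy dependency
    clf = pipeline("text-classification", model=model_id)
    return clf(texts)
```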
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
D1zzYzz/GRIT-BOOLQ-QLORA-llama-3.2-3B-Energy-0.9
|
D1zzYzz
| 2025-08-24T12:47:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"alpaca",
"grit",
"Qlora",
"instruction-tuning",
"fine-tuned",
"text-generation",
"en",
"dataset:google/boolq",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-24T12:47:17Z |
---
tags:
- llama
- alpaca
- grit
- Qlora
- instruction-tuning
- fine-tuned
base_model: meta-llama/Llama-3.2-3B
library_name: peft
license: apache-2.0
datasets:
- google/boolq
language:
- en
pipeline_tag: text-generation
---
# meta-llama/Llama-3.2-3B Fine-tuned with GRIT and LoRA
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) using the **GRIT** (Geometric Reprojection Instruction Tuning) algorithm and **LoRA** on the [google/boolq dataset](https://huggingface.co/datasets/google/boolq).
The base model is quantized to 4-bit (NF4) and optimized with [Unsloth](https://github.com/unslothai/unsloth) to enable efficient fine-tuning.
## 🚀 Training Details
### GRIT Algorithm
- **K-FAC Updates**: Every 10 steps (adaptive) for second-order preconditioning.
- **Neural Reprojection**: Every 20 steps (adaptive) for rank optimization.
- **Rank Adaptation**: Enabled (Threshold: 0.9, Min Rank: 4).
- **Optimized LoRA Modules**: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']
### Fine-tuning Configuration
- **Base Model**: meta-llama/Llama-3.2-3B
- **Quantization**: 4-bit (NF4) with bf16 compute.
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **Batch Size**: 8 (per device)
- **Gradient Accumulation**: 4 (Effective batch = 32)
- **Learning Rate**: 2.0e-05
- **Precision**: bf16 mixed precision
- **Sequence Length**: 1024 tokens
- **Gradient Checkpointing**: Enabled
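The configuration above can be expressed in code. The sketch below mirrors the listed hyperparameters as a `peft` `LoraConfig`; the actual training script is not published, so treat this as illustrative rather than the exact setup used.

```python
# Values taken from the "Fine-tuning Configuration" list above.
TARGET_MODULES = ["q_proj", "k_proj", "v_proj", "o_proj",
                  "gate_proj", "up_proj", "down_proj"]

LORA_KWARGS = dict(
    r=16,                         # LoRA rank
    lora_alpha=32,                # LoRA alpha
    target_modules=TARGET_MODULES,
    task_type="CAUSAL_LM",
)

# Per-device batch 8 with gradient accumulation 4 -> effective batch 32.
EFFECTIVE_BATCH = 8 * 4

def make_lora_config():
    """Build the LoRA configuration (peft imported lazily)."""
    from peft import LoraConfig
    return LoraConfig(**LORA_KWARGS)
```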
### Performance Improvements
- ✅ **Faster Convergence**: K-FAC preconditioning aligns updates with curvature.
- ✅ **Adaptive Rank**: Dynamically prunes LoRA rank to improve parameter efficiency.
## 📊 Training Metrics
- **Total Steps**: 295
- **Final Loss**: 0.3212439351162668
- **Trainable Params**: 24,313,856
## 📝 Algorithm Details
- **K-FAC Preconditioning** (Natural Gradient) and **Neural Reprojection** as per GRIT method.
- **Memory Efficient**: Covariance matrices on CPU to reduce GPU load.
## 🏆 Results
In benchmark comparisons, GRIT has shown **faster convergence and better stability** than standard LoRA or full fine-tuning, making it well suited for efficient single-epoch training. The use of Unsloth further accelerates this process.
## 📝 Citation
If you use this model, please cite the original GRIT paper and:
```bibtex
@misc{grit-lora-Llama-3.2-3B-boolq,
  title={meta-llama/Llama-3.2-3B Fine-tuned with GRIT on google/boolq},
  author={D1zzYzz},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/D1zzYzz/GRIT-BOOLQ-QLORA-llama-3.2-3B-Energy-0.9}
}
```
## ⚖️ License
This model inherits the Apache 2.0 license.
|
esi777/blockassist-bc-camouflaged_trotting_eel_1756039484
|
esi777
| 2025-08-24T12:46:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T12:45:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uHBECTOP/InstaGirlHighNz
|
uHBECTOP
| 2025-08-24T12:44:41Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:TheRaf7/ultra-real-wan2.2",
"base_model:adapter:TheRaf7/ultra-real-wan2.2",
"region:us"
] |
text-to-image
| 2025-08-24T12:43:26Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/ComfyUI_00015_.webp
text: '-'
base_model: TheRaf7/ultra-real-wan2.2
instance_prompt: Instagirl
---
# INGRL
<Gallery />
## Trigger words
You should use `Instagirl` to trigger the image generation.
## Download model
[Download](/uHBECTOP/InstaGirlHighNz/tree/main) the model weights from the Files & versions tab.
|
dgambettaphd/M_mis_run1_gen7_WXS_doc1000_synt64_lr1e-04_acm_FRESH
|
dgambettaphd
| 2025-08-24T12:38:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-24T12:38:07Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
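The card does not yet include a snippet. Since the repo is tagged `unsloth`, one plausible loading path is Unsloth's `FastLanguageModel`; the sketch below is an assumption, and the repo id is a placeholder.

```python
def load_unsloth_model(model_id="<this-repo-id>", max_seq_length=2048):
    """Hypothetical sketch: load the checkpoint via Unsloth for inference.

    model_id is a placeholder for this repository's id; max_seq_length is
    an assumed default, not taken from the card.
    """
    from unsloth import FastLanguageModel  # heavy dependency, imported lazily
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=model_id,
        max_seq_length=max_seq_length,
    )
    FastLanguageModel.for_inference(model)  # enable faster generation mode
    return model, tokenizer
```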
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sparsh2306/cxfgbtbh
|
sparsh2306
| 2025-08-24T12:36:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T12:36:30Z |
https://www.cucei.udg.mx/carreras/fisica/sites/default/files/webform/52iriateefono_m.pdf
https://www.cucei.udg.mx/carreras/alimentos/sites/default/files/webform/52riberiatelefonocomo_llamar_a_iberia_en_mexico.pdf
https://www.cucei.udg.mx/carreras/alimentos/sites/default/files/webform/tm_ai_rntae_tmmex_telefonocomo_puedo_hablar_con_un_agente_de_air_france_.pdf
|
Elizavr/blockassist-bc-reclusive_shaggy_bee_1756038743
|
Elizavr
| 2025-08-24T12:33:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive shaggy bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T12:32:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive shaggy bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
linhdzqua148/opus-mt-ja-en-railway-announcements-148
|
linhdzqua148
| 2025-08-24T12:28:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-24T12:28:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Azzindani/ID_REG_MD_KG
|
Azzindani
| 2025-08-24T12:28:40Z | 0 | 0 | null |
[
"legal",
"indonesia",
"regulations",
"knowledge-graph",
"rag",
"id",
"license:apache-2.0",
"region:us"
] | null | 2025-08-24T12:27:35Z |
---
license: apache-2.0
language:
- id
tags:
- legal
- indonesia
- regulations
- knowledge-graph
- rag
task_categories:
- text-retrieval
- question-answering
size_categories:
- 100K<n<1M
---
# Indonesian Legal Regulations Dataset with Knowledge Graph Features
This dataset contains Indonesian legal regulations enhanced with knowledge graph features for advanced RAG systems.
## Dataset Statistics
- **Total Records**: 100
- **Knowledge Graph Features**: 7
- **Original Features**: Regulation metadata, content, embeddings, TF-IDF vectors
- **Enhanced Features**: Entity extraction, concept clustering, semantic relationships
## Knowledge Graph Features
### Core KG Features
- `kg_entities_json`: Extracted legal entities and their types (JSON serialized)
- `kg_entity_count`: Number of unique entities per document
- `kg_concept_clusters_json`: Concept cluster assignments (JSON serialized)
- `kg_connectivity`: Knowledge graph connectivity score (0-1)
### Scoring Features
- `authority_score`: Legal authority hierarchy score (0-1)
- `temporal_score`: Temporal relevance score (0-1)
- `legal_richness`: Legal content richness score (0-1)
- `cross_ref_strength`: Cross-reference connectivity strength (0-1)
- `completeness_score`: Information completeness score (0-1)
## Usage
```python
from datasets import load_dataset
import json
# Load dataset
dataset = load_dataset("Azzindani/ID_REG_MD_KG", split="train")
# Access KG features
record = dataset[0]
entities = json.loads(record['kg_entities_json'])
clusters = json.loads(record['kg_concept_clusters_json'])
connectivity = record['kg_connectivity']
```
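As a sketch of authority-aware search (one of the applications below), the scoring features can be blended into a single ranking key. The sample records, field subset, and weights here are hypothetical illustrations, not values from the dataset:

```python
# Hypothetical sample records mimicking a subset of the dataset schema
# (in practice these come from load_dataset as shown above).
records = [
    {"title": "Law A", "authority_score": 0.9, "kg_connectivity": 0.4},
    {"title": "Ministerial Regulation B", "authority_score": 0.3, "kg_connectivity": 0.8},
]

def rank_key(record, w_authority=0.7, w_connectivity=0.3):
    # Weighted blend of two 0-1 scores; the weights are illustrative.
    return w_authority * record["authority_score"] + w_connectivity * record["kg_connectivity"]

# Authority-aware ordering: higher-authority regulations surface first.
ranked = sorted(records, key=rank_key, reverse=True)
print([r["title"] for r in ranked])
```

In a retrieval pipeline, such a key would typically re-rank the top-k candidates returned by embedding similarity.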
## Applications
- Legal document retrieval systems
- Regulation question-answering
- Legal concept relationship analysis
- Authority-aware legal search
- Temporal legal document analysis
## License
Apache 2.0
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756038324
|
Ferdi3425
| 2025-08-24T12:25:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T12:25:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1756036392
|
katanyasekolah
| 2025-08-24T12:21:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T12:21:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FilenkoTA/blockassist-bc-dappled_curious_impala_1756036277
|
FilenkoTA
| 2025-08-24T12:16:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dappled curious impala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T12:16:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dappled curious impala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sanjudhrua0/blockassist-bc-tall_omnivorous_coral_1756037390
|
sanjudhrua0
| 2025-08-24T12:12:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall omnivorous coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T12:11:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall omnivorous coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rakushaking/llm-jp-3-13b-it
|
Rakushaking
| 2025-08-24T12:06:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"unsloth",
"lora",
"sft",
"trl",
"torch",
"datasets",
"bitsandbytes",
"4bit",
"gguf",
"q8_0",
"colab",
"chat-template",
"fastlanguagemodel",
"en",
"base_model:llm-jp/llm-jp-3-13b",
"base_model:adapter:llm-jp/llm-jp-3-13b",
"license:apache-2.0",
"region:us"
] | null | 2024-11-26T04:14:00Z |
---
base_model: llm-jp/llm-jp-3-13b
tags:
- unsloth
- peft
- lora
- sft
- trl
- torch
- datasets
- bitsandbytes
- 4bit
- gguf
- q8_0
- colab
- chat-template
- fastlanguagemodel
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rakushaking
- **License:** apache-2.0
- **Finetuned from model :** llm-jp/llm-jp-3-13b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
A Japanese-capable model created by instruction-tuning llm-jp-3-13b.
# Model Details
llm-jp-3-13b-it-d11_lora is based on the large language model llm-jp-3-13b and optimized for specific Japanese tasks through instruction tuning.
Concretely, it has the following characteristics:
- Improved response accuracy: precise responses to instructions in Japanese
- Task-oriented optimization: tailored to specific use cases
# Model Sources
Repository: llm-jp-3-13b-it
Base Model: llm-jp/llm-jp-3-13b
# Direct Use Cases
This model can be used for Japanese question answering and as a task-oriented assistant.
Main applications:
- Question answering in education
- In-house business support at companies
- Japanese NLP research
# Downstream Use
It can be applied to specific Japanese tasks (e.g., semantic classification, summarization, dialogue generation).
# Out-of-Scope Use
The following uses are out of scope:
- Malicious use (e.g., generating responses that promote bias or discrimination)
- Supporting critical decisions where accuracy is essential
# Bias, Risks, and Limitations
# Limitations
Performance may be lower on tasks in languages other than Japanese.
The model may contain biases inherited from its training data.
# Recommendations
Users should understand that the model is not perfectly accurate; verifying its responses is recommended.
# Inference Method 1: Direct LLM Generation
Load and use the model with the code below.
``` python
!pip install -U bitsandbytes
!pip install -U transformers
!pip install -U accelerate
!pip install -U datasets
# Enable interactive notebook widgets (may not work in every environment)
!pip install ipywidgets --upgrade
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# Paste the access token obtained from Hugging Face here.
HF_TOKEN = "<your key>"
# Model ID
model_name = "Rakushaking/llm-jp-3-13b-finetune-it"
# QLoRA config
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=False,
)
# Load model
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map="auto",
token = HF_TOKEN
)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, token = HF_TOKEN)
prompt = f"""### 指示:<your question>
### 回答:
"""
tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
outputs = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=False,
repetition_penalty=1.2
)[0]
output = tokenizer.decode(outputs[tokenized_input.size(1):], skip_special_tokens=True)
print(output)
```
# Inference Method 2: Retrieval-Augmented Generation (RAG)
## Architecture

``` python
# Paste the ID of the model you created here.
model_name = "Rakushaking/llm-jp-3-13b-it"
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
import torch
from tqdm import tqdm
import json
#QLoRA config
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=False,
)
#Load model
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map="auto",
token = HF_TOKEN
)
#Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, token = HF_TOKEN)
import langchain
from langchain.embeddings import HuggingFaceEmbeddings
from llama_index.core import ServiceContext, SQLDatabase, VectorStoreIndex
from typing import Any, List
# Embedding subclass that prepends the "query: " prefix expected by e5 models
class HuggingFaceQueryEmbeddings(HuggingFaceEmbeddings):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
def embed_documents(self, texts: List[str]) -> List[List[float]]:
return super().embed_documents(["query: " + text for text in texts])
def embed_query(self, text: str) -> List[float]:
return super().embed_query("query: " + text)
# Prepare the embedding model (use the query-prefix subclass defined above)
embedding = HuggingFaceQueryEmbeddings(
    model_name="intfloat/multilingual-e5-base",
)
from transformers import pipeline
from langchain.llms import HuggingFacePipeline
# Prepare the text-generation pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512
)
from langchain_community.document_loaders import JSONLoader
loader = JSONLoader(
file_path="<YOUR DATABASE>",
jq_schema=".summary",
text_content=False,
    json_lines=True,  # read the file as JSON Lines
)
docs = loader.load()
print(docs[0])
import langchain.text_splitter
# Split the loaded documents into chunks
text_splitter = langchain.text_splitter.RecursiveCharacterTextSplitter(
chunk_size=1024,
chunk_overlap=0,
)
docs = text_splitter.split_documents(docs)
# Build the FAISS index
from langchain.vectorstores import FAISS
vectorstore = FAISS.from_documents(docs, embedding)
retriever = vectorstore.as_retriever(search_type="similarity", search_kwargs={"k": 2})
from langchain.prompts import ChatPromptTemplate
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
def format_docs(docs):
    # Join retrieved documents into a single context string
    return "\n\n".join(doc.page_content for doc in docs)
def RAG(user_prompt):
    # Prepare the prompt template (instructions are in Japanese, matching the model's training format)
template = """
<bos><start_of_turn>###指示
次の文脈を参考にして回答してください。
ただし、参考情報が質問に関係ない場合は、参考情報を無視して回答を生成してください。
また、回答の際は、同じ単語や話題を繰り返さないでください。
{context}
<end_of_turn><start_of_turn>###質問
{query}
<end_of_turn><start_of_turn>###回答
"""
prompt = langchain.prompts.PromptTemplate.from_template(template) # Corrected indentation
#Modified chain definition:
chain = (
RunnableLambda(lambda x: {"context": format_docs(retriever.get_relevant_documents(x)), "query": x})
| prompt
| RunnableLambda(lambda x: x.to_string()) # Convert StringPromptValue to string
| pipe
| RunnableLambda(lambda x: x[0]["generated_text"] if isinstance(x, list) and x else x["generated_text"]) # Extract generated text from the output of the pipe
| StrOutputParser()
)
res = chain.invoke(user_prompt)
result = res.split("###回答")[-1].replace("\n", "")
return result
def llm(user_prompt):
prompt = f"""### 指示
{user_prompt}
#回答:
"""
tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
outputs = model.generate(
tokenized_input,
max_new_tokens=512,
do_sample=False,
use_cache = True,
repetition_penalty=1.2,
temperature=0.1,
pad_token_id=tokenizer.eos_token_id
)[0]
output = tokenizer.decode(outputs[tokenized_input.size(1):], skip_special_tokens=True)
return output
def judge_score_llm(user_prompt):
    # Route between RAG and direct generation based on retrieval distance
    scores = [item[1] for item in vectorstore.similarity_search_with_score(user_prompt)]
    avg_score = sum(scores) / len(scores) if scores else 0  # average distance score
    if avg_score < 0.25:  # low distance means the index holds relevant context
        print("### Answering with RAG ###")
        return RAG(user_prompt)
    else:
        print("### Answering with the LLM directly ###")
        return llm(user_prompt)
# Load the evaluation dataset (JSON Lines).
datasets = []
with open("./elyza-tasks-100-TV_0.jsonl", "r") as f:
item = ""
for line in f:
line = line.strip()
item += line
if item.endswith("}"):
datasets.append(json.loads(item))
item = ""
```
|
lautan/blockassist-bc-gentle_patterned_goat_1756035363
|
lautan
| 2025-08-24T12:03:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T12:03:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uniswap/blockassist-bc-soaring_rough_bear_1756036206
|
uniswap
| 2025-08-24T11:51:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soaring rough bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T11:50:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soaring rough bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756035069
|
Sayemahsjn
| 2025-08-24T11:50:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T11:50:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Preet-Randhawa-viral-video-Clip/New.full.videos.Preet.Randhawa.Viral.Video.Official.Tutorial
|
Preet-Randhawa-viral-video-Clip
| 2025-08-24T11:50:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T11:49:49Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/mdfprj9k?viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
casilat5124/blockassist-bc-pudgy_mimic_goat_1756035570
|
casilat5124
| 2025-08-24T11:48:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pudgy mimic goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T11:48:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy mimic goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wolf99831/blockassist-bc-climbing_climbing_barracuda_1756034941
|
wolf99831
| 2025-08-24T11:38:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"climbing climbing barracuda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T11:38:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- climbing climbing barracuda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Watch-Videos-Lorient-Rennes-Direct-Video/Watch.Video.Lorient-Rennes.En.Direct.Streaming.Gratuit.tv.Official
|
Watch-Videos-Lorient-Rennes-Direct-Video
| 2025-08-24T11:37:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T11:36:08Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/mrmpsap62?jk" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
narkomax/blockassist-bc-loud_sly_ape_1756034322
|
narkomax
| 2025-08-24T11:36:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud sly ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T11:36:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud sly ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arceina/blockassist-bc-nimble_fishy_cheetah_1756035296
|
arceina
| 2025-08-24T11:35:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nimble fishy cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T11:35:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nimble fishy cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
joanna302/Qwen3-8B-Base_pag_SFT_0.0002
|
joanna302
| 2025-08-24T11:31:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T10:31:24Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_SFT_0.0002
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Qwen3-8B-Base_pag_SFT_0.0002
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_SFT_0.0002", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_SFT_0.0002/runs/1jpcic0m)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
isogen/Mistral-Small-Instruct-2409-exl3-3bpw
|
isogen
| 2025-08-24T11:20:43Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:quantized:mistralai/Mistral-Small-Instruct-2409",
"3-bit",
"exl3",
"region:us"
] | null | 2025-08-24T09:18:50Z |
---
base_model: mistralai/Mistral-Small-Instruct-2409
---
[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409), 3 bits per weight.
### HumanEval (argmax)
| Model | Q4 | Q6 | Q8 | FP16 |
| ---------------------------------------------------------------------------------------------------------------- | ---- | ---- | ---- | ---- |
| [Mistral-Small-Instruct-2409-exl3-3bpw](https://huggingface.co/isogen/Mistral-Small-Instruct-2409-exl3-3bpw) | 76.8 | 74.4 | 76.2 | 75.6 |
| [Mistral-Small-Instruct-2409-exl3-3.5bpw](https://huggingface.co/isogen/Mistral-Small-Instruct-2409-exl3-3.5bpw) | 73.8 | 75.6 | 75.0 | 75.6 |
| [Mistral-Small-Instruct-2409-exl3-4bpw](https://huggingface.co/isogen/Mistral-Small-Instruct-2409-exl3-4bpw) | 78.7 | 78.7 | 79.3 | 79.3 |
| [Mistral-Nemo-Instruct-2407-exl3-4bpw](https://huggingface.co/isogen/Mistral-Nemo-Instruct-2407-exl3-4bpw) | 74.4 | 72.6 | 73.2 | 72.0 |
| [Mistral-Nemo-Instruct-2407-exl3-6bpw](https://huggingface.co/isogen/Mistral-Nemo-Instruct-2407-exl3-6bpw) | 70.7 | 69.5 | 69.5 | 68.9 |
|
ashishlmpmishra/ryalatina
|
ashishlmpmishra
| 2025-08-24T11:18:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-24T11:18:26Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/ryalatina_002500_00_20250823143808.png
text: RYALATINA
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: RYALATINA
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/blob/main/LICENSE.md
---
# RYALATINA
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `RYALATINA` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
kavpro/blockassist-bc-tall_lively_caribou_1756034183
|
kavpro
| 2025-08-24T11:17:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall lively caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T11:17:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moscowx21/blockassist-bc-extinct_bipedal_clam_1756033968
|
moscowx21
| 2025-08-24T11:13:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"extinct bipedal clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T11:13:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- extinct bipedal clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1756032316
|
koloni
| 2025-08-24T11:13:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T11:13:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fujiantiiazhraa/blockassist-bc-marine_robust_bee_1756032368
|
fujiantiiazhraa
| 2025-08-24T11:10:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine robust bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T11:10:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine robust bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
esi777/blockassist-bc-camouflaged_trotting_eel_1756033115
|
esi777
| 2025-08-24T11:00:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:59:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756032872
|
liukevin666
| 2025-08-24T10:56:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:55:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1756031274
|
capungmerah627
| 2025-08-24T10:54:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:54:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1756031056
|
indoempatnol
| 2025-08-24T10:53:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:53:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756032134
|
Ferdi3425
| 2025-08-24T10:42:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:42:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moscowx21/blockassist-bc-extinct_bipedal_clam_1756032111
|
moscowx21
| 2025-08-24T10:42:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"extinct bipedal clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:42:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- extinct bipedal clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gensynw/blockassist-bc-foxy_reclusive_bear_1756032048
|
gensynw
| 2025-08-24T10:41:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foxy reclusive bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:40:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foxy reclusive bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gensynw/blockassist-bc-prowling_aquatic_baboon_1756031967
|
gensynw
| 2025-08-24T10:40:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prowling aquatic baboon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:39:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prowling aquatic baboon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Premprakash3126/phi-2-fault-detector
|
Premprakash3126
| 2025-08-24T10:38:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"endpoints_compatible",
"region:us"
] | null | 2025-08-24T10:38:40Z |
---
base_model: microsoft/phi-2
library_name: transformers
model_name: phi-2-fault-detector
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for phi-2-fault-detector
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Premprakash3126/phi-2-fault-detector", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756030095
|
kojeklollipop
| 2025-08-24T10:35:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:35:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1756031609
|
zenqqq
| 2025-08-24T10:34:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless reptilian caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:34:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless reptilian caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
max256353/blockassist-bc-fanged_whistling_weasel_1756031557
|
max256353
| 2025-08-24T10:33:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fanged whistling weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:33:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fanged whistling weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Amarjitkr/brat-gpt-oss-20b-lora-adapters
|
Amarjitkr
| 2025-08-24T10:26:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-24T10:24:55Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Amarjitkr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
|
lautan/blockassist-bc-gentle_patterned_goat_1756029384
|
lautan
| 2025-08-24T10:22:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:22:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gianghaidang81/blockassist-bc-fleecy_beaked_hawk_1756030111
|
gianghaidang81
| 2025-08-24T10:18:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fleecy beaked hawk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:18:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fleecy beaked hawk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Amarjitkr/brat-gpt-oss-20b-merged-weights
|
Amarjitkr
| 2025-08-24T10:16:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-24T10:11:28Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- gpt_oss
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Amarjitkr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
|
nema122/blockassist-bc-robust_fluffy_ram_1756030195
|
nema122
| 2025-08-24T10:11:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust fluffy ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:11:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust fluffy ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1756028562
|
calegpedia
| 2025-08-24T10:09:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:09:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fopppyu/blockassist-bc-keen_invisible_kingfisher_1756029885
|
fopppyu
| 2025-08-24T10:05:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen invisible kingfisher",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:04:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen invisible kingfisher
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kavpro/blockassist-bc-tall_lively_caribou_1756029584
|
kavpro
| 2025-08-24T10:01:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall lively caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T10:00:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
henrybrown2988/blockassist-bc-gregarious_fishy_tamarin_1756028893
|
henrybrown2988
| 2025-08-24T09:57:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gregarious fishy tamarin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T09:57:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gregarious fishy tamarin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thaddeusk/llama-3.1-8b-rp-quark-onnx-npu
|
thaddeusk
| 2025-08-24T09:55:05Z | 0 | 0 | null |
[
"onnx",
"base_model:tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b",
"base_model:quantized:tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b",
"license:llama3.1",
"region:us"
] | null | 2025-08-24T09:52:01Z |
---
license: llama3.1
base_model:
- tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
---
|
vdaular/f5-tts-es
|
vdaular
| 2025-08-24T09:43:07Z | 0 | 0 |
f5-tts
|
[
"f5-tts",
"es",
"base_model:SWivid/F5-TTS",
"base_model:finetune:SWivid/F5-TTS",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-08-24T09:43:07Z |
---
license: cc-by-nc-4.0
library_name: f5-tts
language:
- es
base_model:
- SWivid/F5-TTS
---
# [GitHub](https://github.com/jpgallegoar/Spanish-F5)
# F5-TTS Spanish Language Model
## Overview
The F5-TTS model is finetuned specifically for Spanish language speech synthesis. This project aims to deliver high-quality, regionally diverse speech synthesis capabilities for Spanish speakers.
## License
This model is released under the CC0-1.0 license, which allows for free usage, modification, and distribution.
## Datasets
The following datasets were used for training:
- [Voxpopuli Dataset](https://huggingface.co/datasets/facebook/voxpopuli), with mainly Peninsular Spain accents
- Crowdsourced high-quality Spanish speech data:
- Argentinian Spanish
- Chilean Spanish
- Colombian Spanish
- Peruvian Spanish
- Puerto Rican Spanish
- Venezuelan Spanish
- TEDx Spanish Corpus
Additional sources:
- [Crowdsourced high-quality Argentinian Spanish speech data set](https://www.openslr.org/61/)
- [Crowdsourced high-quality Chilean Spanish speech data set](https://www.openslr.org/71/)
- [Crowdsourced high-quality Colombian Spanish speech data set](https://www.openslr.org/72/)
- [Crowdsourced high-quality Peruvian Spanish speech data set](https://www.openslr.org/73/)
- [Crowdsourced high-quality Puerto Rico Spanish speech data set](https://www.openslr.org/74/)
- [Crowdsourced high-quality Venezuelan Spanish speech data set](https://www.openslr.org/75/)
- [TEDx Spanish Corpus](https://www.openslr.org/67/)
## Model Information
**Base Model:** SWivid/F5-TTS
**Total Training Duration:** 218 hours of audio
**Training Configuration:**
- Batch Size: 3200
- Max Samples: 64
- Training Steps: 1,200,000
## Usage Instructions
### Method 0: HuggingFace space (https://huggingface.co/spaces/jpgallegoar/Spanish-F5)
### Method 1: Manual Model Replacement
1. **Run the F5-TTS Application:** Start the F5-TTS application and observe the terminal for output indicating the model file path. It should appear similar to:
```
model : C:\Users\thega\.cache\huggingface\hub\models--SWivid--F5-TTS\snapshots\995ff41929c08ff968786b448a384330438b5cb6\F5TTS_Base\model_1200000.safetensors
```
2. **Replace the Model File:**
- Navigate to the displayed file location.
- Rename the existing model file to `model_1200000.safetensors.bak`.
- Download `model_1200000.safetensors` from this repository and save it to the same location.
3. **Restart the Application:** Relaunch the F5-TTS application to load the updated model.
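The backup-and-replace step above can be sketched in a few lines of Python. This is an illustrative helper, not part of the F5-TTS codebase; the snapshot directory is whatever path the application prints at startup, and `swap_checkpoint` is a hypothetical name.

```python
# Sketch of Method 1: back up the stock checkpoint, then copy the
# Spanish checkpoint into its place. Paths are assumptions -- use the
# snapshot path printed by the F5-TTS app on startup.
import shutil
from pathlib import Path

def swap_checkpoint(snapshot_dir, new_checkpoint,
                    name="model_1200000.safetensors"):
    """Rename the existing model to *.bak and install the new one."""
    target = Path(snapshot_dir) / name
    if target.exists():
        # Keep the original so the swap is reversible.
        shutil.move(str(target), str(target) + ".bak")
    shutil.copy(str(new_checkpoint), str(target))
    return target
```

Restoring the original model is then just a matter of deleting the new file and dropping the `.bak` suffix.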
### Alternative Methods
- **GitHub Repository:** Clone the [Spanish-F5 repository](https://github.com/jpgallegoar/Spanish-F5/) and follow the provided installation instructions.
- **Google Colab:** Use the model via [Google Colab](https://colab.research.google.com/drive/1mm4NAlZVZq2_oL6ftijY64-PeEYwnqG1?usp=sharing).
- Runtime -> Change Runtime Type -> T4 GPU
- Runtime -> Run all
- Click on the link shown in "Running on public URL: https://link.gradio.live" when it loads
- **Jupyter Notebook:** Run the model through the `Spanish_F5.ipynb` notebook.
## Contributions and Recommendations
This model may benefit from further fine-tuning to enhance its performance across different Spanish dialects. Contributions from the community are encouraged. For optimal output quality, preprocess the reference audio by removing background noise, balancing audio levels, and enhancing clarity.
|
leomartinez5619/blockassist-bc-beaked_pudgy_whale_1756027668
|
leomartinez5619
| 2025-08-24T09:37:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked pudgy whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T09:37:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked pudgy whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
beotborry/textual_inversion_man
|
beotborry
| 2025-08-24T09:32:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-24T08:31:00Z |
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - beotborry/textual_inversion_man
These are textual inversion adaption weights for stable-diffusion-v1-5/stable-diffusion-v1-5. You can find some example images in the following.
## Intended uses & limitations
#### How to use
```python
# Illustrative usage sketch (the placeholder token name "<man>" is an
# assumption -- check the learned_embeds file for the actual token).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("beotborry/textual_inversion_man")
image = pipe("a photo of <man> hiking", num_inference_steps=30).images[0]
image.save("man.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
original-uppal-farm-girl-viral-video/New.full.videos.uppal.farm.girl.Viral.Video.Official.Tutorial
|
original-uppal-farm-girl-viral-video
| 2025-08-24T09:28:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-24T09:22:04Z |
A viral video featuring Harjinder Kaur Uppal, popularly known as the “Uppal Farm Girl,” has taken social media by storm, captivating millions with her unique blend of traditional farming and modern digital storytelling. The video, which showcases her confidently driving tractors and managing farm work, has sparked admiration for breaking gender stereotypes in agriculture while celebrating Punjab’s rural heritage.
uppal farm viral video original
In this article, we explore the original Uppal Farm Girl viral video, its impact, Harjinder Kaur’s inspiring journey, and answer frequently asked questions about this internet sensation.
Who Is Harjinder Kaur Uppal (Uppal Farm Girl)?
Harjinder Kaur Uppal is a Punjabi farmer and social media influencer who gained fame for her engaging videos depicting farm life. Key highlights about her include:
✅ Background: A young woman from Punjab, India, who grew up in an agricultural family.
✅ Content Style: Combines traditional farming techniques with modern vlogging, often featuring tractor rides, crop harvesting, and rural lifestyle snippets.
✅ Social Media Presence: Active on Instagram, TikTok, and YouTube, where her videos have gone viral for their authenticity and charm .
What’s in the Original Uppal Farm Girl Viral Video?
The viral clip that propelled Harjinder to fame typically features:
🚜 Her driving a tractor with confidence, breaking stereotypes about women in farming.
🌾 Scenes of agricultural work, such as ploughing fields, sowing seeds, or harvesting crops.
🎵 Punjabi folk music or popular tracks (like those of Sidhu Moosewala) playing in the background, adding cultural flair .
The video stands out for its positive representation of rural life, inspiring many to appreciate farming as a noble profession.
Why Did the Video Go Viral?
Several factors contributed to its massive popularity:
🔥 Breaking Stereotypes – Harjinder challenges the notion that farming is only for men.
📱 Relatable & Authentic Content – Viewers connect with her genuine passion for agriculture.
🌍 Cultural Pride – Her videos celebrate Punjabi farming traditions, resonating with both rural and urban audiences .
Public Reactions & Impact
The response has been overwhelmingly positive:
Support from Farmers: Many in the agricultural community praise her for bringing visibility to their work.
Youth Inspiration: Young women see her as a role model for pursuing unconventional careers.
Media Attention: News outlets and digital platforms have featured her story extensively .
FAQs About the Uppal Farm Girl Viral Video
1. Where can I watch the original viral video?
The video spread on TikTok, Instagram, and YouTube, though no single “original” link is verified. Some clips can be found on her social media profiles .
2. Is there any controversy around the video?
No major controversies exist—most content is wholesome and agriculture-focused. Some unrelated searches mistakenly associate her with “leaked” videos, but these are unsubstantiated .
3. What is Harjinder Kaur’s message?
She promotes farming as a proud profession, encourages women in agriculture, and blends tradition with modern techniques .
4. How has she impacted farming communities?
Her videos have increased interest in farming among youth and highlighted sustainable practices.
5. Does she monetize her content?
Yes, through brand collaborations, farming equipment promotions, and social media monetization .
Conclusion
The Uppal Farm Girl viral video is more than just internet fame—it’s a movement celebrating agriculture, gender equality, and cultural pride. Harjinder Kaur Uppal’s story proves that passion and authenticity can inspire millions.
Have you seen her videos? What do you think? Share your thoughts in the comments!
CategoriesViral video
Tagsjagga boss uppal farm, jagga tractor boss girl video, jagga tractor boss uppal farm, original, punjabi tractor girl video, tractor girl full video, tractor girl new video, tractor girl video, tractor girl video full, tractor girl viral video, uppal farm, uppal farm girl, uppal farm girl interview, uppal farm girl video, uppal farm girl video viral, uppal farm girl viral video link, uppal farm official, uppal farm punjabi prank roasted vlogger, uppal farm video, uppal farm video reality, uppal farm viral video mms, uppalfarmgirl, viral video, viral हुई खेती कर रही लड़की harjinder kaur uppal, viralvideo
|
yadav908ankit/blockassist-bc-deft_wily_armadillo_1756027277
|
yadav908ankit
| 2025-08-24T09:22:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deft wily armadillo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T09:22:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deft wily armadillo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|