modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-28 12:28:31) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 524 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-28 12:28:28) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
Kijai/WanVideo_comfy_fp8_scaled
|
Kijai
| 2025-08-27T14:34:44Z | 306,610 | 172 |
diffusion-single-file
|
[
"diffusion-single-file",
"comfyui",
"base_model:Wan-AI/Wan2.1-VACE-1.3B",
"base_model:finetune:Wan-AI/Wan2.1-VACE-1.3B",
"license:apache-2.0",
"region:us"
] | null | 2025-07-22T10:39:42Z |
---
tags:
- diffusion-single-file
- comfyui
license: apache-2.0
base_model:
- Wan-AI/Wan2.1-VACE-14B
- Wan-AI/Wan2.1-VACE-1.3B
---
Better fp8 scaled models (when measured against fp16), based on the quantization code from https://github.com/Tencent-Hunyuan/HunyuanVideo/blob/main/hyvideo/modules/fp8_optimization.py
These can be used with https://github.com/kijai/ComfyUI-WanVideoWrapper (latest version) and the native ComfyUI WanVideo nodes.
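For intuition, here is a minimal sketch of the per-tensor fp8 scaling idea (an illustration assuming PyTorch's `torch.float8_e4m3fn` dtype, not the exact code from the linked repository):
```python
import torch

# Sketch: per-tensor fp8 (e4m3) scaling. Each weight tensor is divided by a
# scale chosen so its max magnitude fits the fp8 range, then cast to fp8.
def to_fp8_scaled(weight: torch.Tensor):
    finfo = torch.finfo(torch.float8_e4m3fn)
    scale = weight.abs().max().clamp(min=1e-12) / finfo.max
    q = (weight / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return q, scale  # dequantize as q.to(weight.dtype) * scale
```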
14B-T2V comparison test without LoRAs, 25 steps, 832x480x81
---
<video controls autoplay width="50%" src="https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/DwlAGbj20it1unZW54NDC.mp4"></video>
2.2 A14B-T2V test
---
<video controls autoplay width="50%" src="https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/6A_AZ7GN_uxeRH0vwsWkH.mp4"></video>
<video controls autoplay width="50%" src="https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/GpuqQ4YwoR3kjxkhuvP8P.mp4"></video>
The e5m2 model marked as v2 is the one uploaded here; all of these models are scaled, even if I forgot to label them properly.
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756305225
|
Ferdi3425
| 2025-08-27T14:34:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:34:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756305177
|
liukevin666
| 2025-08-27T14:33:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:33:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/LatentDream-exp-delta-8b-i1-GGUF
|
mradermacher
| 2025-08-27T14:33:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Entropicengine/LatentDream-exp-delta-8b",
"base_model:quantized:Entropicengine/LatentDream-exp-delta-8b",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-27T11:30:18Z |
---
base_model: Entropicengine/LatentDream-exp-delta-8b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Entropicengine/LatentDream-exp-delta-8b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LatentDream-exp-delta-8b-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
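As a concrete example, one of the quants below can be run with llama.cpp (a minimal sketch, assuming `llama-cli` is installed and the Q4_K_M file from the table has been downloaded):
```bash
# Run the i1-Q4_K_M quant with llama.cpp's CLI.
llama-cli -m LatentDream-exp-delta-8b.i1-Q4_K_M.gguf -p "Hello," -n 128
```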
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-delta-8b-i1-GGUF/resolve/main/LatentDream-exp-delta-8b.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
camilasfeijoo/my_smolvla_drawertapefinale
|
camilasfeijoo
| 2025-08-27T14:33:52Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:camilasfeijoo/drawertape",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-27T14:33:48Z |
---
base_model: lerobot/smolvla_base
datasets: camilasfeijoo/drawertape
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Jack-Payne1/qwen2-5-7b-instruct-bad-doctor-seed3
|
Jack-Payne1
| 2025-08-27T14:33:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T14:22:19Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
library_name: transformers
model_name: qwen2-5-7b-instruct-bad-doctor-seed3
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for qwen2-5-7b-instruct-bad-doctor-seed3
This model is a fine-tuned version of [unsloth/Qwen2.5-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Jack-Payne1/qwen2-5-7b-instruct-bad-doctor-seed3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jacktpayne51-macquarie-university/clarifying-em/runs/mxbzmv16)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ibm-granite/granite-embedding-30m-sparse
|
ibm-granite
| 2025-08-27T14:32:58Z | 9,865 | 14 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"safetensors",
"roberta",
"language",
"granite",
"embeddings",
"sparse-encoder",
"sparse",
"splade",
"feature-extraction",
"en",
"arxiv:2502.20204",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-02-17T22:56:29Z |
---
language:
- en
license: apache-2.0
tags:
- language
- granite
- embeddings
- sentence-transformers
- sparse-encoder
- sparse
- splade
pipeline_tag: feature-extraction
library_name: sentence-transformers
---
# Granite-Embedding-30m-Sparse
**Model Summary:**
Granite-Embedding-30m-Sparse is a 30M-parameter sparse biencoder embedding model from the Granite Experimental suite that can be used to generate high-quality text embeddings. The model produces a variable-length, bag-of-words-like dictionary containing expansions of sentence tokens and their corresponding weights, and is trained using a combination of open-source relevance-pair datasets with permissive, enterprise-friendly licenses and IBM-collected and IBM-generated datasets. While maintaining competitive scores on academic benchmarks such as BEIR, this model also performs well on many enterprise use cases. It was developed using retrieval-oriented pretraining, contrastive finetuning, and knowledge distillation for improved performance.
- **Developers:** Granite Embedding Team, IBM
- **GitHub Repository:** [ibm-granite/granite-embedding-models](https://github.com/ibm-granite/granite-embedding-models)
- **Paper:** [Technical Report](https://arxiv.org/abs/2502.20204)
- **Release Date**: February 26th, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English.
**Intended use:**
The model is designed to produce, for a given text, a variable-length, bag-of-words-like dictionary containing expansions of sentence tokens and their corresponding weights, which can be used for text similarity, retrieval, and search applications.
**Usage with Milvus:**
The model is compatible with the Milvus vector database and is straightforward to use.
First, install the pymilvus library:
```shell
pip install "pymilvus[model]"
```
The model can then be used to encode pairs of texts and find the similarity between their representations:
```python
from pymilvus import model
from pymilvus import MilvusClient, DataType
client = MilvusClient("./milvus_demo.db")
client.drop_collection(collection_name="my_sparse_collection")
schema = client.create_schema(
auto_id=True,
enable_dynamic_fields=True,
)
schema.add_field(field_name="pk", datatype=DataType.VARCHAR, is_primary=True, max_length=100)
schema.add_field(field_name="id", datatype=DataType.VARCHAR, is_primary=False, max_length=100)
schema.add_field(field_name="embeddings", datatype=DataType.SPARSE_FLOAT_VECTOR)
index_params = client.prepare_index_params()
index_params.add_index(field_name="embeddings",
index_name="sparse_inverted_index",
index_type="SPARSE_INVERTED_INDEX",
metric_type="IP",
params={"drop_ratio_build": 0.2})
client.create_collection(
collection_name="my_sparse_collection",
schema=schema,
index_params=index_params
)
embeddings_model = model.sparse.SpladeEmbeddingFunction(
model_name="ibm-granite/granite-embedding-30m-sparse",
device="cpu",
batch_size=2,
k_tokens_query=50,
k_tokens_document=192
)
# Prepare documents to be ingested
docs = [
"Artificial intelligence was founded as an academic discipline in 1956.",
"Alan Turing was the first person to conduct substantial research in AI.",
"Born in Maida Vale, London, Turing was raised in southern England.",
]
# SpladeEmbeddingFunction.encode_documents returns sparse matrix or sparse array depending
# on the milvus-model version. reshape(1,-1) ensures the format is correct for ingestion.
doc_vector = [{"embeddings": doc_emb.reshape(1,-1), "id": f"item_{i}"} for i, doc_emb in enumerate(embeddings_model.encode_documents(docs))]
client.insert(
collection_name="my_sparse_collection",
data=doc_vector
)
# Prepare search parameters
search_params = {
"params": {"drop_ratio_search": 0.2}, # Additional optional search parameters
}
# Prepare the query vector
queries = [
"When was artificial intelligence founded",
"Where was Turing born?"
]
query_vector = embeddings_model.encode_queries(queries)
res = client.search(
collection_name="my_sparse_collection",
data=query_vector,
limit=1, #top k documents to return
output_fields=["id"],
search_params=search_params,
)
for r in res:
print(r)
```
**Usage with Sentence Transformers:**
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("ibm-granite/granite-embedding-30m-sparse")
# Run inference
docs = [
"Artificial intelligence was founded as an academic discipline in 1956.",
"Alan Turing was the first person to conduct substantial research in AI.",
"Born in Maida Vale, London, Turing was raised in southern England.",
]
docs_embeddings = model.encode_document(docs, max_active_dims=192)
print(docs_embeddings.shape)
# [3, 50265]
queries = ["When was artificial intelligence founded", "Where was Turing born?"]
queries_embeddings = model.encode_query(queries, max_active_dims=50)
print(queries_embeddings.shape)
# [2, 50265]
# Get the similarity scores for the embeddings
similarities = model.similarity(queries_embeddings, docs_embeddings)
print(similarities.shape)
# [2, 3]
for i, query in enumerate(queries):
best_doc_index = similarities[i].argmax().item()
print(f"Query: {query}")
print(f"Best doc associate: Similarity: {similarities[i][best_doc_index]:.4f}, Doc: {docs[best_doc_index]}")
intersection = model.intersection(queries_embeddings[i], docs_embeddings[best_doc_index])
decoded_intersection = model.decode(intersection, top_k=10)
print("Top 10 tokens influencing the similarity:")
for token, score in decoded_intersection:
print(f"Token: {token}, Score: {score:.4f}")
# Query: When was artificial intelligence founded
# Best doc associate: Similarity: 12.3641, Doc: Artificial intelligence was founded as an academic discipline in 1956.
# Top 10 tokens influencing the similarity:
# Token: ĠAI, Score: 2.7591
# Token: Ġintelligence, Score: 2.2971
# Token: Ġartificial, Score: 1.7654
# Token: Ġfounded, Score: 1.3254
# Token: Ġinvention, Score: 0.9808
# Token: Ġlearning, Score: 0.4847
# Token: Ġcomputer, Score: 0.4789
# Token: Ġrobot, Score: 0.3466
# Token: Ġestablishment, Score: 0.3371
# Token: Ġscientific, Score: 0.2804
# Query: Where was Turing born?
# Best doc associate: Similarity: 17.1359, Doc: Born in Maida Vale, London, Turing was raised in southern England.
# Top 10 tokens influencing the similarity:
# Token: uring, Score: 2.9761
# Token: ĠTuring, Score: 2.4544
# Token: Ġborn, Score: 2.4314
# Token: ing, Score: 1.7760
# Token: ure, Score: 1.7626
# Token: Ġcomput, Score: 1.3356
# Token: Ġraised, Score: 1.3285
# Token: able, Score: 1.1940
# Token: Ġphilosopher, Score: 0.4118
# Token: Ġmachine, Score: 0.3977
```
**Evaluation:**
Granite-Embedding-30m-Sparse is competitive in performance with naver/splade-v3-distilbert despite having half as many parameters. We also compare the sparse model with its similarly sized dense embedding counterpart, `ibm-granite/granite-embedding-30m-english`. The performance of the models on MTEB Retrieval (i.e., BEIR) is reported below.
To maintain consistency with the results reported by `naver/splade-v3-distilbert`, we do not include CQADupstack and MS-MARCO in the table below.
| Model | Parameters (M)| Vocab Size | BEIR Retrieval (13) |
|---------------------------------|:------------:|:-------------------:|:-------------------: |
|naver/splade-v3-distilbert |67 |30522 |50.0 |
|granite-embedding-30m-english |30 |50265 |50.6 |
|granite-embedding-30m-sparse |30 |50265 |50.8 |
**Model Architecture:**
Granite-Embedding-30m-Sparse is based on an encoder-only, RoBERTa-like transformer architecture, trained internally at IBM Research.
| Model | granite-embedding-30m-sparse |
| :--------- | :-------:|
| Embedding size | **384** |
| Number of layers | **6** |
| Number of attention heads | **12** |
| Intermediate size | **1536** |
| Activation Function | **GeLU** |
| Vocabulary Size | **50265**|
| Max. Sequence Length | **512** |
| # Parameters | **30M** |
**Training Data:**
Overall, the training data consists of four key sources: (1) unsupervised title-body paired data scraped from the web, (2) publicly available paired data with permissive, enterprise-friendly licenses, (3) IBM-internal paired data targeting specific technical domains, and (4) IBM-generated synthetic data. The data is listed below:
| **Dataset** | **Num. Pairs** |
|----------------------------------------------------|:---------------:|
| SPECTER citation triplets | 684,100 |
| Stack Exchange Duplicate questions (titles) | 304,525 |
| Stack Exchange Duplicate questions (bodies) | 250,519 |
| Stack Exchange Duplicate questions (titles+bodies) | 250,460 |
| Natural Questions (NQ) | 100,231 |
| SQuAD2.0 | 87,599 |
| PAQ (Question, Answer) pairs | 64,371,441 |
| Stack Exchange (Title, Answer) pairs | 4,067,139 |
| Stack Exchange (Title, Body) pairs | 23,978,013 |
| Stack Exchange (Title+Body, Answer) pairs | 187,195 |
| S2ORC Citation pairs (Titles) | 52,603,982 |
| S2ORC (Title, Abstract) | 41,769,185 |
| S2ORC (Citations, abstracts) | 52,603,982 |
| WikiAnswers Duplicate question pairs | 77,427,422 |
| SearchQA | 582,261 |
| HotpotQA | 85,000 |
| Fever | 109,810 |
| Arxiv | 2,358,545 |
| Wikipedia | 20,745,403 |
| PubMed | 20,000,000 |
| Miracl En Pairs | 9,016 |
| DBPedia Title-Body Pairs | 4,635,922 |
| Synthetic: Query-Wikipedia Passage | 1,879,093 |
| Synthetic: Fact Verification | 9,888 |
| IBM Internal Triples | 40,290 |
| IBM Internal Title-Body Pairs | 1,524,586 |
Notably, we do not use the popular MS-MARCO retrieval dataset in our training corpus due to its non-commercial license.
**Infrastructure:**
We train the Granite Embedding Models using IBM's computing cluster, Cognitive Compute Cluster, which is outfitted with NVIDIA A100 80GB GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.
**Ethical Considerations and Limitations:**
The data used to train the base language model was filtered to remove text containing hate, abuse, and profanity. Granite-Embedding-30m-Sparse is trained only on English texts and has a context length of 512 tokens (longer texts will be truncated to this size).
**Resources**
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
## Citation
```
@misc{awasthy2025graniteembeddingmodels,
title={Granite Embedding Models},
author={Parul Awasthy and Aashka Trivedi and Yulong Li and Mihaela Bornea and David Cox and Abraham Daniels and Martin Franz and Gabe Goodhart and Bhavani Iyer and Vishwajeet Kumar and Luis Lastras and Scott McCarley and Rudra Murthy and Vignesh P and Sara Rosenthal and Salim Roukos and Jaydeep Sen and Sukriti Sharma and Avirup Sil and Kate Soule and Arafat Sultan and Radu Florian},
year={2025},
eprint={2502.20204},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2502.20204},
}
```
|
GlitChwoLf9/blockassist-bc-graceful_lazy_reindeer_1756303401
|
GlitChwoLf9
| 2025-08-27T14:32:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"graceful lazy reindeer",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:32:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- graceful lazy reindeer
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756305055
|
Ferdi3425
| 2025-08-27T14:31:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:31:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jeancisco/test-runpod-llama
|
jeancisco
| 2025-08-27T14:30:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T14:27:23Z |
---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** jeancisco
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Vivek20052/blockassist-bc-howling_domestic_puffin_1756304817
|
Vivek20052
| 2025-08-27T14:27:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling domestic puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:27:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling domestic puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1756303106
|
sampingkaca72
| 2025-08-27T14:27:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:27:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stratplans/x3d-qwen2.5-coder-7b-lora
|
stratplans
| 2025-08-27T14:27:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"x3d",
"3d-generation",
"lora",
"code-generation",
"text-generation",
"conversational",
"en",
"dataset:stratplans/savage-x3d-generation",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T14:25:54Z |
---
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- generated_from_trainer
- x3d
- 3d-generation
- lora
- code-generation
datasets:
- stratplans/savage-x3d-generation
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# X3D Generation Model - Qwen2.5-Coder-7B LoRA
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) for generating X3D (Extensible 3D) scene descriptions from natural language prompts.
## Model Description
This model generates syntactically valid and semantically meaningful X3D scene descriptions from natural language prompts. X3D is an ISO-standard XML-based format for representing 3D graphics, widely used in simulation, scientific visualization, and web-based 3D applications.
### Key Features
- Generates valid X3D XML code from natural language descriptions
- Trained on 19,712 instruction-response pairs derived from the Naval Postgraduate School Savage X3D Archive
- Uses LoRA (Low-Rank Adaptation) for efficient fine-tuning
- 4-bit quantization compatible for reduced memory usage
## Training Details
### Dataset
- **Source**: Naval Postgraduate School (NPS) Savage X3D Archive
- **Base models**: 1,232 unique X3D files
- **Augmented dataset**: 19,712 instruction-response pairs
- **Categories**: Military equipment, vehicles, buildings, terrain, humanoids, and abstract geometries
### Model Architecture
- **Base Model**: Qwen2.5-Coder-7B-Instruct (7.7B parameters)
- **Fine-tuning Method**: LoRA with 4-bit quantization
- **LoRA Configuration**:
- Rank: 32
- Alpha: 64
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Trainable parameters: 80.7M (1.05% of total)
### Training Configuration
- **Hardware**: 5x NVIDIA RTX 4090 GPUs (24GB VRAM each)
- **Training time**: 11.5 hours
- **Epochs**: 3
- **Effective batch size**: 80
- **Learning rate**: 2e-4 with cosine decay
- **Final training loss**: 0.0086
- **Final validation loss**: 0.0112
## Usage
### Installation
```bash
pip install transformers peft accelerate bitsandbytes
```
### Loading the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
# Load base model with 4-bit quantization
base_model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen2.5-Coder-7B-Instruct",
load_in_4bit=True,
device_map="auto",
trust_remote_code=True
)
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "stratplans/x3d-qwen2.5-coder-7b-lora")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("stratplans/x3d-qwen2.5-coder-7b-lora")
# Generate X3D
prompt = """<|im_start|>system
You are an X3D 3D model generator. Generate valid X3D XML code based on the user's description.
<|im_end|>
<|im_start|>user
Create an X3D model of a red sphere with radius 2 units
<|im_end|>
<|im_start|>assistant
"""
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_length=2048, do_sample=True, temperature=0.7)
x3d_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(x3d_code)
```
### Example Prompts
1. "Create an X3D model of a blue cube with metallic surface"
2. "Generate an X3D scene with a rotating pyramid"
3. "Build an X3D model of a simple robot with movable joints"
4. "Design an X3D terrain with hills and valleys"
## Performance
- **Generation speed**: ~50 tokens/second on single RTX 4090
- **Memory requirement**: 8GB VRAM for inference with 4-bit quantization
- **Validity rate**: Estimated 85% syntactically valid X3D on first generation (see the well-formedness check after this list)
- **Semantic accuracy**: Follows input specifications in 70% of test cases
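Given the ~15% of first generations that may not parse, it is worth validating outputs before use. A minimal sketch using Python's standard library (`is_well_formed_x3d` is a hypothetical helper, not part of this repo):
```python
import xml.etree.ElementTree as ET

# Hypothetical helper: returns True only if the output parses as XML and the
# root element is an X3D node (namespace-qualified tags are tolerated).
def is_well_formed_x3d(x3d_code: str) -> bool:
    try:
        root = ET.fromstring(x3d_code)
    except ET.ParseError:
        return False
    return root.tag.endswith("X3D")
```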
## Limitations
1. Maximum context length limited to 2048 tokens during training
2. Complex scenes may require multiple generation attempts
3. Animation and interaction features have limited support
4. Best performance on object types similar to training data
## Citation
If you use this model, please cite:
```bibtex
@misc{x3d-qwen-2024,
title={X3D Generation with Fine-tuned Qwen2.5-Coder},
author={stratplans},
year={2024},
publisher={HuggingFace}
}
```
## License
This model inherits the Apache 2.0 license from the base Qwen2.5-Coder model.
## Acknowledgments
- Naval Postgraduate School for the Savage X3D Archive
- Qwen team for the base model
- The X3D and Web3D Consortium community
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756304745
|
Ferdi3425
| 2025-08-27T14:26:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:26:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eshanroy5678/blockassist-bc-untamed_dextrous_dingo_1756304364
|
eshanroy5678
| 2025-08-27T14:26:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed dextrous dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:23:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed dextrous dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
huggingtoots/unsloth-GLM-4.5-Air-MLX-8Bit
|
huggingtoots
| 2025-08-27T14:25:39Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"unsloth",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"zh",
"base_model:unsloth/GLM-4.5-Air",
"base_model:quantized:unsloth/GLM-4.5-Air",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-27T02:00:01Z |
---
tags:
- unsloth
- mlx
- mlx-my-repo
base_model: unsloth/GLM-4.5-Air
license: mit
language:
- en
- zh
pipeline_tag: text-generation
library_name: transformers
---
# huggingtoots/Unsloth-GLM-4.5-Air-MLX-8Bit
The Model [huggingtoots/unsloth-GLM-4.5-Air-MLX-8Bit](https://huggingface.co/huggingtoots/unsloth-GLM-4.5-Air-MLX-8Bit) was converted to MLX format from [unsloth/GLM-4.5-Air](https://huggingface.co/unsloth/GLM-4.5-Air) using mlx-lm version **0.26.3**.
## Toot's Note:
This model was converted and quantized using unsloth's version of GLM-4.5-Air. It should include the chat template fixes.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("huggingtoots/unsloth-GLM-4.5-Air-MLX-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
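A command-line alternative (a sketch, assuming mlx-lm's bundled generate entry point):
```bash
python -m mlx_lm.generate --model huggingtoots/unsloth-GLM-4.5-Air-MLX-8Bit --prompt "hello"
```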
|
hnv2520/LNG_Qwen2.5VL_32B_150st
|
hnv2520
| 2025-08-27T14:24:07Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"en",
"base_model:unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T14:24:05Z |
---
base_model: unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hnv2520
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aleebaster/blockassist-bc-sly_eager_boar_1756302970
|
aleebaster
| 2025-08-27T14:23:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:23:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ronx-labs/affine-deepseek-r1-1.5b
|
ronx-labs
| 2025-08-27T14:22:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2501.12948",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T14:21:20Z |
---
license: mit
library_name: transformers
---
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
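In other words, pass@1 here is the per-query fraction of correct samples, averaged over queries. A minimal sketch of that estimator (assuming a per-response correctness judgment):
```python
# Sketch: estimate pass@1 from k = 64 sampled responses per query.
# correct_counts[i] = number of correct samples for query i.
def pass_at_1(correct_counts: list[int], k: int = 64) -> float:
    return sum(c / k for c in correct_counts) / len(correct_counts)
```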
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com), and switch on the "DeepThink" button.
We also provide an OpenAI-compatible API at the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
**NOTE: Hugging Face Transformers does not directly support DeepSeek-R1 yet.**
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang):
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
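Putting the recommendations together, here is a minimal sketch of a request to the vLLM server started above via its OpenAI-compatible API (the served model name and local port are assumptions):
```python
from openai import OpenAI

# Assumes the vLLM server from section 6 is running locally on the default port.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    # No system prompt: all instructions go in the user turn (recommendation 2).
    messages=[{"role": "user", "content": (
        "Please reason step by step, and put your final answer within \\boxed{}. "
        "What is 17 * 24?"
    )}],
    temperature=0.6,  # recommendation 1: 0.5-0.7, with 0.6 recommended
    top_p=0.95,
)
print(resp.choices[0].message.content)
```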
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756304360
|
Ferdi3425
| 2025-08-27T14:19:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:19:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DXlanguage/ChineseDongxiangTranslation
|
DXlanguage
| 2025-08-27T14:19:46Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"m2m_100",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-08-27T14:16:09Z |
---
license: cc-by-nc-4.0
---
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1756302816
|
mang3dd
| 2025-08-27T14:19:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:19:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756304314
|
ggozzy
| 2025-08-27T14:19:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:19:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1756304060
|
yaelahnal
| 2025-08-27T14:18:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:15:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
phamngochanb807/blockassist-bc-stocky_snappy_tiger_1756303358
|
phamngochanb807
| 2025-08-27T14:18:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stocky snappy tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:18:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stocky snappy tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fromthesky/PLDR-LLM-v51G-106M-test
|
fromthesky
| 2025-08-27T14:18:31Z | 0 | 0 | null |
[
"safetensors",
"pldrllm",
"text-generation",
"large-language-model",
"power-law-decoder-representations",
"power-law-graph-attention",
"pldr-llm",
"kv-cache",
"g-cache",
"kvg-cache",
"pytorch",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2306.01116",
"arxiv:2101.00027",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-27T11:39:47Z |
---
language:
- en
tags:
- text-generation
- large-language-model
- power-law-decoder-representations
- power-law-graph-attention
- pldr-llm
- kv-cache
- g-cache
- kvg-cache
- pytorch
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
---
# PLDR-LLM-v51G-106M-test
## Model Description
PLDR-LLM-v51G-106M-test is a large language model from power law decoder representations with KV-cache and G-cache support. It is trained with a model configuration similar to [PLDRv51G-106M-2](https://huggingface.co/fromthesky/PLDR-LLM-v51G-106M-2), using the default Huggingface Transformers implementation of RoPE embeddings for Llama when `reference_rope=False` in the PLDR-LLM model configuration.
## Training data
PLDR-LLM-v51G-106M-test was pretrained on the [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a publicly available English web dataset with extensive filtering and deduplication.
## Training procedure
This model was trained for ~8B tokens on RefinedWeb over 250k steps per rank. It was trained autoregressively with cross-entropy loss.
## Intended Use and Limitations
This model is made available for testing and development of the PLDR-LLM model implementation in the Huggingface Transformers library. The repository with added PLDR-LLM model support can be found [here](https://github.com/burcgokden/transformers). Given text as an input prompt, it carries out next token prediction to generate continuation text. The context length for this model is 1024 tokens.
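As a sketch (not from the original card): after installing the fork with PLDR-LLM support linked above, generation should follow the same `pipeline` pattern used by the sibling PLDR-LLM releases:
```python
from transformers import pipeline  # fork with PLDR-LLM support (see link above)

# Minimal sketch, assuming the fork registers the `pldrllm` architecture natively.
pipe = pipeline(task="text-generation",
                model="fromthesky/PLDR-LLM-v51G-106M-test",
                device="cuda")
prompt = "PLDR-LLM is a large language model architecture developed by Fromthesky Research Labs."
out = pipe(prompt, do_sample=True, top_p=0.6, top_k=0, temperature=1, max_new_tokens=100)
print(out[0]["generated_text"])
```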
### Limitations and Biases
Large Language Models may generate text that is profane, lewd, socially unacceptable or offensive based on the contents of the dataset they were pretrained on. RefinedWeb is a dataset that is as toxic and biased as the Pile. Please see the papers for [RefinedWeb](https://arxiv.org/abs/2306.01116) and [the Pile](https://arxiv.org/pdf/2101.00027) for more information. Moreover, large language models are also susceptible to hallucinations and may generate text that contains incorrect, irrelevant or misleading information. Since it is very hard to anticipate the contents of generated text ahead of time, the output of large language models needs to be heavily moderated and curated to prevent undesired content from appearing without warning.
|
nguyenvanvietks1969/blockassist-bc-soft_snorting_mallard_1756303407
|
nguyenvanvietks1969
| 2025-08-27T14:18:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft snorting mallard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:18:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft snorting mallard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ngoduylan9999/blockassist-bc-sly_nasty_mallard_1756303354
|
ngoduylan9999
| 2025-08-27T14:17:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly nasty mallard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:17:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly nasty mallard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
doanthibichloan4057/blockassist-bc-stalking_purring_mosquito_1756303338
|
doanthibichloan4057
| 2025-08-27T14:17:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stalking purring mosquito",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:17:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stalking purring mosquito
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756302683
|
NahedDom
| 2025-08-27T14:17:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:17:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nguyentrancattien1507/blockassist-bc-pawing_amphibious_eagle_1756303384
|
nguyentrancattien1507
| 2025-08-27T14:17:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pawing amphibious eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:17:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing amphibious eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nguyenthanhconglc83/blockassist-bc-wiry_amphibious_jay_1756303374
|
nguyenthanhconglc83
| 2025-08-27T14:17:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry amphibious jay",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:17:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry amphibious jay
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lethanhdo334/blockassist-bc-nasty_subtle_bear_1756303387
|
lethanhdo334
| 2025-08-27T14:16:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nasty subtle bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:16:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nasty subtle bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
levanphuongvp94/blockassist-bc-carnivorous_short_yak_1756303369
|
levanphuongvp94
| 2025-08-27T14:16:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous short yak",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:16:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous short yak
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756304065
|
ggozzy
| 2025-08-27T14:15:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:15:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nguyenduymanhhp4/blockassist-bc-moist_horned_crab_1756303334
|
nguyenduymanhhp4
| 2025-08-27T14:15:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"moist horned crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:15:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- moist horned crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eshanroy5678/blockassist-bc-untamed_dextrous_dingo_1756303738
|
eshanroy5678
| 2025-08-27T14:15:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed dextrous dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:12:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed dextrous dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756304053
|
Ferdi3425
| 2025-08-27T14:14:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:14:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nm-testing/llama4-scout-17b-eagle3-dummy-drafter
|
nm-testing
| 2025-08-27T14:14:13Z | 0 | 0 | null |
[
"safetensors",
"llama4_text",
"eagle3",
"speculative-decoding",
"llama4",
"vllm",
"testing",
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T12:54:08Z |
---
license: apache-2.0
tags:
- eagle3
- speculative-decoding
- llama4
- vllm
- testing
---
# Llama4 Scout 17B Eagle3 Dummy Drafter
This is a **dummy/test drafter model** for testing the Eagle3 speculative decoding implementation with Llama4 Scout 17B Instruct models in vLLM.
⚠️ **WARNING**: This is not a real model and should not be used for actual inference. It contains random weights and is only for testing purposes.
## Model Details
- **Architecture**: Llama4ForCausalLM (Eagle3 drafter variant)
- **Target Model**: Llama4 Scout 17B Instruct (specifically `RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16`)
- **Base Model**: Based on the Instruct version of Llama4 17B Scout model
- **Hidden Size**: 2048
- **Layers**: 1 (single decoder layer as per Eagle3 design)
- **Vocabulary**: 128256 tokens
- **Parameters**: ~322M
## Configuration
This drafter model is specifically designed for the Instruct version of Llama4 Scout 17B and uses:
- Eagle3 speculative decoding architecture
- Single-layer transformer with auxiliary hidden state combination
- Llama4 layer structure with RoPE (Rotary Position Embedding)
- SGLang-compatible weight naming (midlayer.*)
- Vocabulary mappings (t2d/d2t) for draft-to-target token conversion
## Usage
This model is designed specifically for testing the vLLM Eagle3 implementation:
```bash
# Use with vLLM for testing Eagle3 speculative decoding with Llama4 Scout
vllm serve RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16 \
--speculative-config '{"method": "eagle3", "model": "nm-testing/llama4-scout-17b-eagle3-dummy-drafter", ...}'
```
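Once the server is up, a quick sanity check against vLLM's OpenAI-compatible endpoint can confirm that drafting runs end to end (a sketch; assumes the default `localhost:8000` address):
```python
import requests

# Minimal smoke test against the OpenAI-compatible completions endpoint;
# output quality is irrelevant here since the drafter weights are random.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16",
        "prompt": "Hello",
        "max_tokens": 16,
    },
)
print(resp.json()["choices"][0]["text"])
```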
## Testing Purpose Only
This model:
- Contains random weights
- Is not trained on any data
- Should not be used for actual inference
- Is only for vLLM development and testing
## Related
- vLLM: https://github.com/vllm-project/vllm
- Eagle3: Speculative decoding method
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756302380
|
lisaozill03
| 2025-08-27T14:13:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:13:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Wajid002/gpt2
|
Wajid002
| 2025-08-27T14:13:14Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T14:13:14Z |
---
license: apache-2.0
---
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756303919
|
Ferdi3425
| 2025-08-27T14:12:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:12:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fromthesky/PLDR-LLM-v51G-106M-3
|
fromthesky
| 2025-08-27T14:12:03Z | 0 | 0 | null |
[
"safetensors",
"pldrllm",
"text-generation",
"large-language-model",
"power-law-decoder-representations",
"power-law-graph-attention",
"pldr-llm",
"kv-cache",
"g-cache",
"kvg-cache",
"pytorch",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2502.13502",
"arxiv:2306.01116",
"arxiv:2101.00027",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-02-23T08:19:35Z |
---
language:
- en
tags:
- text-generation
- large-language-model
- power-law-decoder-representations
- power-law-graph-attention
- pldr-llm
- kv-cache
- g-cache
- kvg-cache
- pytorch
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
---
# PLDR-LLM-v51G-106M-3
## Model Description
PLDR-LLM-v51G-106M-3 is a large language model from power law decoder representations (PLDR-LLM) with KV-cache and G-cache support. PLDR-LLM is a new foundational language model architecture that utilizes power law graph attention to generate deductive and inductive outputs. This model has a parameter size of 106M. It refers to PLDRv51G-106M-3, whose architecture and training details are provided in Table 1 of the research paper titled [PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference](https://arxiv.org/abs/2502.13502).
## Training data
PLDR-LLM-v51G-106M-3 was pretrained on the [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a publicly available English web dataset with extensive filtering and deduplication.
## Training procedure
This model was trained for ~8B tokens on RefinedWeb over 250k steps per rank. It was trained autoregressively with cross-entropy loss.
## Intended Use and Limitations
This model is intended to be used for research purposes. Given text as input prompt, it carries out next token prediction to generate continuation text. The context length for this model is 1024 tokens.
## How to Use
### Via Huggingface Transformers Library
PLDR-LLM has custom model support for the Huggingface Transformers library. The PLDR-LLM custom model support was developed on the Transformers v4.55.4 release, the latest available at the time.
Using `pipeline`:
```python
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="fromthesky/PLDR-LLM-v51G-106M-3",
    device="cuda"
)

prompt = "PLDR-LLM is a large language model architecture developed by Fromthesky Research Labs."
output = pipe(prompt, top_p=0.6, top_k=0, temperature=1, do_sample=True, max_new_tokens=100)
print(output[0]["generated_text"])
```
Using `AutoModel`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device="cuda" # or "cpu"
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path="fromthesky/PLDR-LLM-v51G-106M-3",
    device_map=device,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    pretrained_model_name_or_path="fromthesky/PLDR-LLM-v51G-106M-3",
    add_eos_token=False,
    legacy=False,  # lowercase: a capitalized `Legacy` kwarg would be silently ignored
    trust_remote_code=True,
)
prompt="PLDR-LLM is a large language model architecture developed by Fromthesky Research Labs."
inputs = tokenizer([prompt], return_tensors="pt").to(device=device)
generated_ids = model.generate(**inputs,
max_new_tokens=100,
top_p=0.6,
top_k=0,
temperature=1,
do_sample=True,
use_cache=True
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
#### PLDR-LLM specific configurations:
- `custom_G_type`: `None` for G values learned during pretraining, `'identity'` for an LLM equivalent to SDPA, `'random'` for G values drawn from a random normal distribution, `'external'` for custom G values that can be assigned after model initialization. This setting matters mainly for training; for inference it is set in the model's config.json file.
- `cache_first_G`: For batched inference, if set to `True`, the G values from the first sample prompt in the batch are cached and used for all samples. If set to `False`, G values are cached separately for each sample prompt in the batch. For contrastive generation with `custom_G_type=None`, this needs to be set to `True`.
- `reference_rope`: If set to `True`, the RoPE implementation from the original paper is used; this is the case for the model pretrained in this repo. If set to `False`, the RoPE implementation from the Huggingface Transformers library is used.
- `output_pldr_attentions=True` returns the deductive outputs and learnable parameters of the power law graph attention module as a tuple containing:
the output of the residual metric learner (metric tensor, **A**), the output (**A<sub>LM</sub>**) after application of iSwiGLU on the metric tensor, the learned exponents of the potential tensor, the learned weights for the energy-curvature tensor, the learned bias for the energy-curvature tensor, the energy-curvature tensor (**G<sub>LM</sub>**), and the attention weights.
See config.json for other model configuration details.
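For instance, continuing from the `AutoModel` example above, the deductive outputs can be requested on a forward pass (a sketch: the `pldr_attentions` output name and the tuple indices are assumptions based on the ordering listed above):
```python
import torch

# Sketch only: request the power law graph attention internals on a forward pass.
inputs = tokenizer(["Power law graph attention"], return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs, output_pldr_attentions=True)

# Assumed output attribute name; the tuple is ordered as listed above.
pldr_outputs = outputs.pldr_attentions
metric_tensor = pldr_outputs[0]  # output of the residual metric learner (A)
g_lm = pldr_outputs[5]           # energy-curvature tensor (G_LM)
```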
#### Notes:
- Transformers v4.55.4 causes generation with quantized cache to fail at the time of this writing.
To overcome this issue, install the most recent updates from the transformers library:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e ".[dev]"
```
We also have a fork of the transformers library with PLDR-LLM model support for future development. The PLDR-LLM model files are added to the library, so custom model files are not necessary.
```bash
git clone https://github.com/burcgokden/transformers
cd transformers
git checkout add_PLDR_LLM
pip install -e ".[dev]"
```
- Static cache is not supported for models with `custom_G_type=None`.
- When `add_bos_token=False` and `add_eos_token=False` are set for the tokenizer model, the prompt `""` is an invalid input for single-batch inference as it doesn't contain any tokens. When padding is enabled, batched inference with the prompt `""` as one of the samples causes its `input_ids` to be pad tokens and its `attention_mask` to be all zeros. This edge case is handled differently for `_attn_implementation='eager'` and `'sdpa'`, resulting in different generation outputs for this prompt. Setting `add_bos_token=True`, `add_eos_token=True` or explicitly providing the prompt as `"[PAD]"`, `"[START]"`, or `"[END]"` gives the same output for either implementation. This issue does not affect KV-cache and G-cache.
### Via Original Implementation
- The original model implementation files can be found in the folder named `paper_saved_model_files/`. The model checkpoint and tokenizer can be loaded into the PLDR-LLM framework to generate text as described in the code repository for training this model: [PLDR-LLM-with-KVG-cache](https://github.com/burcgokden/PLDR-LLM-with-KVG-cache).
### LM Evaluation Harness Support
- The model can be used with a fork of LM-Evaluation-Harness Suite with PLDR-LLM with KV-cache and G-cache support: [lm-evaluation-harness-with-PLDR-LLM-kvg-cache](https://github.com/burcgokden/lm-evaluation-harness-with-PLDR-LLM-kvg-cache).
### Limitations and Biases
Large Language Models may generate text that is profane, lewd, socially unacceptable or offensive based on the contents of the dataset they were pretrained on. RefinedWeb is a dataset that is as toxic and biased as the Pile. Please see the papers for [RefinedWeb](https://arxiv.org/abs/2306.01116) and [the Pile](https://arxiv.org/pdf/2101.00027) for more information. Moreover, large language models are also susceptible to hallucinations and may generate text that contains incorrect, irrelevant or misleading information. Since it is very hard to anticipate the contents of generated text ahead of time, the output of large language models needs to be heavily moderated and curated to prevent undesired content from appearing without warning.
## Eval results
- The evaluation results on benchmarks with zero-shot setting and their comparison to LLM models of similar size reported in the literature can be found in Tables 3-5 and 7 of the [research paper](https://arxiv.org/abs/2502.13502).
- For the implementation via the Huggingface Transformers library, evaluating on the same benchmark suite gives the same results as in the paper for all benchmarks, except for the PIQA score, which is slightly higher at 61.21 for this model.
### BibTeX entry and citation info
Please cite this model as:
```bibtex
@misc{gokden2025pldrllmkvgcache,
title={PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference},
author={Burc Gokden},
year={2025},
eprint={2502.13502},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.13502},
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756303817
|
ggozzy
| 2025-08-27T14:11:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:11:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
icon88/alexa
|
icon88
| 2025-08-27T14:11:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-27T13:26:47Z |
---
license: creativeml-openrail-m
---
|
Dmitry002200/my_emotion_model_2
|
Dmitry002200
| 2025-08-27T14:10:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-27T14:10:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ttkairamkonda/whisper-large-v3-faa-atc-100k-TL
|
ttkairamkonda
| 2025-08-27T14:09:26Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-24T23:46:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Tarun]
- **Funded by [optional]:** [FAA (Federal Aviation Administration)]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [Whisper Large V3]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yccc12/q-Taxi-v3
|
yccc12
| 2025-08-27T14:09:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-27T13:58:52Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the pickle-loading helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="yccc12/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
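Continuing from the snippet above, a short greedy-rollout sketch (an assumption: the pickled dict exposes a `qtable` key, as in the Hugging Face Deep RL course format):
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```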
|
motimalu/qwen-flat-color-v2
|
motimalu
| 2025-08-27T14:08:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-27T13:42:13Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
flat color, no lineart, a woman sitting under the night sky, blue hair, blue eyes, stars, off shoulder, she is releasing a star with the text "夢" written on it
output:
url: images/ComfyUI_00852_.png
- text: >-
flat color no lineart 1girl medium pink hair frilled shirt outdoors cherry blossom tree shade sitting falling petals shaded face looking to the side pink background with text "花" overlapping
output:
url: images/ComfyUI_00919_.png
base_model: Qwen/Qwen-Image
instance_prompt: flat color, no lineart
license: apache-2.0
---
# Flat Color - Style
<Gallery />
## Model description
Flat Color - Style
Trained on images without visible lineart, flat colors, and little to no indication of depth.
Previews generated with ComfyUI, using the lightx2v Lightning 4-step LoRA.
Reprinted from CivitAI by request: https://civitai.com/models/1132089/flat-color-style
## Trigger words
You should use `flat color` and `no lineart` together to trigger the image generation.
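A minimal diffusers sketch (assumptions, not from the original card: `DiffusionPipeline` resolves the Qwen-Image base model, and the LoRA file in this repo is loadable by `load_lora_weights`):
```python
import torch
from diffusers import DiffusionPipeline

# Sketch only: load the Qwen-Image base model and apply this style LoRA.
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("motimalu/qwen-flat-color-v2")

prompt = "flat color, no lineart, a woman sitting under the night sky, blue hair, stars"
image = pipe(prompt).images[0]
image.save("flat_color_example.png")
```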
|
BootesVoid/cmetz4d0901cnsr53d5faugi3_cmeu0bl9w01eisr53z5ons4yn
|
BootesVoid
| 2025-08-27T14:08:41Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-27T14:08:39Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LUMINA7X
---
# Cmetz4D0901Cnsr53D5Faugi3_Cmeu0Bl9W01Eisr53Z5Ons4Yn
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LUMINA7X` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LUMINA7X",
"lora_weights": "https://huggingface.co/BootesVoid/cmetz4d0901cnsr53d5faugi3_cmeu0bl9w01eisr53z5ons4yn/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmetz4d0901cnsr53d5faugi3_cmeu0bl9w01eisr53z5ons4yn', weight_name='lora.safetensors')
image = pipeline('LUMINA7X').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmetz4d0901cnsr53d5faugi3_cmeu0bl9w01eisr53z5ons4yn/discussions) to add images that show off what you’ve made with this LoRA.
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756302478
|
Sayemahsjn
| 2025-08-27T14:06:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:06:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756303538
|
xinnn32
| 2025-08-27T14:06:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:06:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756303533
|
Ferdi3425
| 2025-08-27T14:06:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:05:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756303519
|
Dejiat
| 2025-08-27T14:05:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:05:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fromthesky/PLDR-LLM-v51G-106M-1
|
fromthesky
| 2025-08-27T14:05:42Z | 0 | 0 | null |
[
"safetensors",
"pldrllm",
"text-generation",
"large-language-model",
"power-law-decoder-representations",
"power-law-graph-attention",
"pldr-llm",
"kv-cache",
"g-cache",
"kvg-cache",
"pytorch",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2502.13502",
"arxiv:2306.01116",
"arxiv:2101.00027",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-02-23T08:18:14Z |
---
language:
- en
tags:
- text-generation
- large-language-model
- power-law-decoder-representations
- power-law-graph-attention
- pldr-llm
- kv-cache
- g-cache
- kvg-cache
- pytorch
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
---
# PLDR-LLM-v51G-106M-1
## Model Description
PLDR-LLM-v51G-106M-1 is a large language model from power law decoder representations (PLDR-LLM) with KV-cache and G-cache support. PLDR-LLM is a new foundational language model architecture that utilizes power law graph attention to generate deductive and inductive outputs. This model has a parameter size of 106M. It refers to PLDRv51G-106M-1, whose architecture and training details are provided in Table 1 of the research paper titled [PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference](https://arxiv.org/abs/2502.13502).
## Training data
PLDR-LLM-v51G-106M-1 was pretrained on the [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a publicly available English web dataset with extensive filtering and deduplication.
## Training procedure
This model was trained for ~8B tokens on RefinedWeb over 250k steps per rank. It was trained autoregressively with cross-entropy loss.
## Intended Use and Limitations
This model is intended to be used for research purposes. Given text as input prompt, it carries out next token prediction to generate continuation text. The context length for this model is 1024 tokens.
## How to Use
### Via Huggingface Transformers Library
PLDR-LLM has custom model support for the Huggingface Transformers library. The PLDR-LLM custom model support was developed on the Transformers v4.55.4 release, the latest available at the time.
Using `pipeline`:
```python
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="fromthesky/PLDR-LLM-v51G-106M-1",
    device="cuda"
)

prompt = "PLDR-LLM is a large language model architecture developed by Fromthesky Research Labs."
output = pipe(prompt, top_p=0.6, top_k=0, temperature=1, do_sample=True, max_new_tokens=100)
print(output[0]["generated_text"])
```
Using `AutoModel`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device="cuda" # or "cpu"
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path="fromthesky/PLDR-LLM-v51G-106M-1",
    device_map=device,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    pretrained_model_name_or_path="fromthesky/PLDR-LLM-v51G-106M-1",
    add_eos_token=False,
    legacy=False,  # lowercase: a capitalized `Legacy` kwarg would be silently ignored
    trust_remote_code=True,
)
prompt="PLDR-LLM is a large language model architecture developed by Fromthesky Research Labs."
inputs = tokenizer([prompt], return_tensors="pt").to(device=device)
generated_ids = model.generate(**inputs,
max_new_tokens=100,
top_p=0.6,
top_k=0,
temperature=1,
do_sample=True,
use_cache=True
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
#### PLDR-LLM specific configurations:
- `custom_G_type`: `None` for G values learned during pretraining, `'identity'` for an LLM equivalent to SDPA, `'random'` for G values drawn from a random normal distribution, `'external'` for custom G values that can be assigned after model initialization. This setting matters mainly for training; for inference it is set in the model's config.json file.
- `cache_first_G`: For batched inference, if set to `True`, the G values from the first sample prompt in the batch are cached and used for all samples. If set to `False`, G values are cached separately for each sample prompt in the batch. For contrastive generation with `custom_G_type=None`, this needs to be set to `True`.
- `reference_rope`: If set to `True`, the RoPE implementation from the original paper is used; this is the case for the model pretrained in this repo. If set to `False`, the RoPE implementation from the Huggingface Transformers library is used.
- `output_pldr_attentions=True` returns the deductive outputs and learnable parameters of the power law graph attention module as a tuple containing:
the output of the residual metric learner (metric tensor, **A**), the output (**A<sub>LM</sub>**) after application of iSwiGLU on the metric tensor, the learned exponents of the potential tensor, the learned weights for the energy-curvature tensor, the learned bias for the energy-curvature tensor, the energy-curvature tensor (**G<sub>LM</sub>**), and the attention weights.
See config.json for other model configuration details.
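These settings live in the checkpoint's `config.json` and can be inspected or overridden at load time; a minimal sketch using standard Transformers config handling (the attribute names mirror the configuration keys above):
```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("fromthesky/PLDR-LLM-v51G-106M-1", trust_remote_code=True)
# Inspect the PLDR-LLM specific settings shipped with the checkpoint.
print(config.custom_G_type, config.cache_first_G, config.reference_rope)

# Override a setting for this session, e.g. cache G values per sample in batched inference.
config.cache_first_G = False
model = AutoModelForCausalLM.from_pretrained(
    "fromthesky/PLDR-LLM-v51G-106M-1", config=config, trust_remote_code=True
)
```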
#### Notes:
- Transformers v4.55.4 causes generation with quantized cache to fail at the time of this writing.
To overcome this issue, install the most recent updates from the transformers library:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e ".[dev]"
```
We also have a fork of the transformers library with PLDR-LLM model support for future development. The PLDR-LLM model files are added to the library, so custom model files are not necessary.
```bash
git clone https://github.com/burcgokden/transformers
cd transformers
git checkout add_PLDR_LLM
pip install -e ".[dev]"
```
- Static cache is not supported for models with `custom_G_type=None`.
- When `add_bos_token=False` and `add_eos_token=False` are set for the tokenizer model, the prompt `""` is an invalid input for single-batch inference as it doesn't contain any tokens. When padding is enabled, batched inference with the prompt `""` as one of the samples causes its `input_ids` to be pad tokens and its `attention_mask` to be all zeros. This edge case is handled differently for `_attn_implementation='eager'` and `'sdpa'`, resulting in different generation outputs for this prompt. Setting `add_bos_token=True`, `add_eos_token=True` or explicitly providing the prompt as `"[PAD]"`, `"[START]"`, or `"[END]"` gives the same output for either implementation. This issue does not affect KV-cache and G-cache.
### Via Original Implementation
- The original model implementation files can be found in the folder named `paper_saved_model_files/`. The model checkpoint and tokenizer can be loaded into the PLDR-LLM framework to generate text as described in the code repository for training this model: [PLDR-LLM-with-KVG-cache](https://github.com/burcgokden/PLDR-LLM-with-KVG-cache).
### LM Evaluation Harness Support
- The model can be used with a fork of LM-Evaluation-Harness Suite with PLDR-LLM with KV-cache and G-cache support: [lm-evaluation-harness-with-PLDR-LLM-kvg-cache](https://github.com/burcgokden/lm-evaluation-harness-with-PLDR-LLM-kvg-cache).
### Limitations and Biases
Large Language Models may generate text that is profane, lewd, socially unacceptable or offensive based on the contents of the dataset they were pretrained on. RefinedWeb is a dataset that is as toxic and biased as the Pile. Please see the papers for [RefinedWeb](https://arxiv.org/abs/2306.01116) and [the Pile](https://arxiv.org/pdf/2101.00027) for more information. Moreover, large language models are also susceptible to hallucinations and may generate text that contains incorrect, irrelevant or misleading information. Since it is very hard to anticipate the contents of generated text ahead of time, the output of large language models needs to be heavily moderated and curated to prevent undesired content from appearing without warning.
## Eval results
- The evaluation results on benchmarks with zero-shot setting and their comparison to LLM models of similar size reported in the literature can be found in Tables 3-5 and 7 of the [research paper](https://arxiv.org/abs/2502.13502).
- For the implementation via the Huggingface Transformers library, evaluating on the same benchmark suite gives the same results as in the paper for all benchmarks.
### BibTeX entry and citation info
Please cite this model as:
```bibtex
@misc{gokden2025pldrllmkvgcache,
title={PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference},
author={Burc Gokden},
year={2025},
eprint={2502.13502},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.13502},
}
```
|
eshanroy5678/blockassist-bc-untamed_dextrous_dingo_1756303171
|
eshanroy5678
| 2025-08-27T14:05:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed dextrous dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:03:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed dextrous dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756302172
|
Loder-S
| 2025-08-27T14:04:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:04:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jcruse248/promptwal
|
jcruse248
| 2025-08-27T14:04:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T14:04:16Z |
---
license: apache-2.0
---
|
brayn0009/mark_ai
|
brayn0009
| 2025-08-27T14:01:35Z | 0 | 0 | null |
[
"marketing",
"business",
"en",
"fr",
"ar",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"license:gemma",
"region:us"
] | null | 2025-08-27T13:53:41Z |
---
license: gemma
language:
- en
- fr
- ar
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
tags:
- marketing
- business
---
|
felixZzz/student_sft_len32k_sub1k_multiZ_acc_mixw8_calib-0827
|
felixZzz
| 2025-08-27T14:01:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T13:56:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756303202
|
Ferdi3425
| 2025-08-27T14:00:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T14:00:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fromthesky/PLDR-LLM-v51-110M-5
|
fromthesky
| 2025-08-27T13:59:54Z | 0 | 0 | null |
[
"safetensors",
"pldrllm",
"text-generation",
"large-language-model",
"power-law-decoder-representations",
"power-law-graph-attention",
"pldr-llm",
"kv-cache",
"g-cache",
"kvg-cache",
"pytorch",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2502.13502",
"arxiv:2306.01116",
"arxiv:2101.00027",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-02-23T08:16:10Z |
---
language:
- en
tags:
- text-generation
- large-language-model
- power-law-decoder-representations
- power-law-graph-attention
- pldr-llm
- kv-cache
- g-cache
- kvg-cache
- pytorch
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
---
# PLDR-LLM-v51-110M-5
## Model Description
PLDR-LLM-v51-110M-5 is a large language model from power law decoder representations (PLDR-LLM) with KV-cache and G-cache support. PLDR-LLM is a new foundational language model architecture that utilizes power law graph attention to generate deductive and inductive outputs. This model has a parameter size of 110M. It refers to PLDRv51-110M-5, whose architecture and training details are provided in Table 1 of the research paper titled [PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference](https://arxiv.org/abs/2502.13502).
## Training data
PLDR-LLM-v51-110M-5 was pretrained on the [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a publicly available English web dataset with extensive filtering and deduplication.
## Training procedure
This model was trained for ~8B tokens on RefinedWeb over 250k steps per rank. It was trained autoregressively with cross-entropy loss.
## Intended Use and Limitations
This model is intended to be used for research purposes. Given text as input prompt, it carries out next token prediction to generate continuation text. The context length for this model is 1024 tokens.
## How to Use
### Via Huggingface Transformers Library
PLDR-LLM has custom model support for the Huggingface Transformers library. The PLDR-LLM custom model support was developed on the Transformers v4.55.4 release, the latest available at the time.
Using `pipeline`:
```python
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="fromthesky/PLDR-LLM-v51-110M-5",
    device="cuda"
)

prompt = "PLDR-LLM is a large language model architecture developed by Fromthesky Research Labs."
output = pipe(prompt, top_p=0.6, top_k=0, temperature=1, do_sample=True, max_new_tokens=100)
print(output[0]["generated_text"])
```
Using `AutoModel`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device="cuda" # or "cpu"
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path="fromthesky/PLDR-LLM-v51-110M-5",
    device_map=device,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    pretrained_model_name_or_path="fromthesky/PLDR-LLM-v51-110M-5",
    add_eos_token=False,
    legacy=False,  # lowercase: a capitalized `Legacy` kwarg would be silently ignored
    trust_remote_code=True,
)
prompt="PLDR-LLM is a large language model architecture developed by Fromthesky Research Labs."
inputs = tokenizer([prompt], return_tensors="pt").to(device=device)
generated_ids = model.generate(**inputs,
max_new_tokens=100,
top_p=0.6,
top_k=0,
temperature=1,
do_sample=True,
use_cache=True
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
#### PLDR-LLM specific configurations:
- `custom_G_type`: `None` for G values learned during pretraining, `'identity'` for an LLM equivalent to SDPA, `'random'` for G values drawn from a random normal distribution, `'external'` for custom G values that can be assigned after model initialization. This setting matters mainly for training; for inference it is set in the model's config.json file.
- `cache_first_G`: For batched inference, if set to `True`, the G values from the first sample prompt in the batch are cached for all samples; if set to `False`, G values are cached separately for each sample prompt in the batch. For contrastive generation with `custom_G_type=None`, this needs to be set to `True`.
- `reference_rope`: If set to `True`, the RoPE implementation from the original paper is used; this is the case for the model pretrained in this repo. If set to `False`, the RoPE implementation from the Hugging Face Transformers library is used.
- `output_pldr_attentions=True` returns the deductive outputs and learnable parameters of the power law graph attention module as a tuple containing: the output of the residual metric learner (metric tensor, **A**), the output (**A<sub>LM</sub>**) after application of iSwiGLU on the metric tensor, the learned exponents of the potential tensor, the learned weights for the energy-curvature tensor, the learned bias for the energy-curvature tensor, the energy-curvature tensor (**G<sub>LM</sub>**), and the attention weights.
See config.json for other model configuration details.
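Continuing from the `AutoModel` example above, a minimal sketch of inspecting these deductive outputs is shown below; note that passing `output_pldr_attentions=True` to the forward call (analogously to `output_attentions`) and the `pldr_attentions` attribute name are assumptions rather than confirmed API:
```python
# Hedged sketch: inspecting PLDR-specific deductive outputs.
# Assumes the forward pass accepts output_pldr_attentions analogously to
# output_attentions; the pldr_attentions attribute name is an assumption.
import torch

inputs = tokenizer(["PLDR-LLM is a large language model."], return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs, output_pldr_attentions=True, return_dict=True)
pldr_outputs = outputs.pldr_attentions  # the tuple described in the list above
print(len(pldr_outputs))
```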
#### Notes:
- Transformers v4.55.4 causes generation with a quantized cache to fail at the time of this writing. To overcome this issue, install the most recent version of the transformers library from source:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e ".[dev]"
```
We also maintain a fork of the transformers library with PLDR-LLM model support for future development. The PLDR-LLM model files are added to the library, so custom model files are not necessary.
```bash
git clone https://github.com/burcgokden/transformers
cd transformers
git checkout add_PLDR_LLM
pip install -e ".[dev]"
```
- Static cache is not supported for models with `custom_G_type=None`.
- When `add_bos_token=False` and `add_eos_token=False` are set for the tokenizer model, the prompt `""` is an invalid input for single-batch inference, as it doesn't contain any tokens. When padding is enabled, batched inference with the prompt `""` as one of the samples causes its `input_ids` to be pad tokens and its `attention_mask` to be all zeros. This edge case is handled differently for `_attn_implementation='eager'` and `'sdpa'`, resulting in different generation outputs for this prompt. Setting `add_bos_token=True`, `add_eos_token=True`, or explicitly providing the prompt as `"[PAD]"`, `"[START]"`, or `"[END]"` gives the same output for either implementation. This issue does not affect KV-cache and G-cache.
### Via Original Implementation
- The original model implementation files can be found in the folder named `paper_saved_model_files/`. The model checkpoint and tokenizer can be loaded into the PLDR-LLM framework to generate text as described in the code repository for training this model: [PLDR-LLM-with-KVG-cache](https://github.com/burcgokden/PLDR-LLM-with-KVG-cache).
### LM Evaluation Harness Support
- The model can be used with a fork of LM-Evaluation-Harness Suite with PLDR-LLM with KV-cache and G-cache support: [lm-evaluation-harness-with-PLDR-LLM-kvg-cache](https://github.com/burcgokden/lm-evaluation-harness-with-PLDR-LLM-kvg-cache).
### Limitations and Biases
Large language models may generate text that is profane, lewd, socially unacceptable or offensive based on the contents of the dataset they were pretrained on. RefinedWeb is a dataset that is as toxic and biased as the Pile. Please see the papers for [RefinedWeb](https://arxiv.org/abs/2306.01116) and [the Pile](https://arxiv.org/pdf/2101.00027) for more information. Moreover, large language models are also susceptible to hallucinations and may generate text that contains incorrect, irrelevant or misleading information. Since it is very hard to anticipate the contents of generated text ahead of time, the output of large language models needs to be heavily moderated and curated to avoid undesired content appearing without warning.
## Eval results
- The evaluation results on benchmarks with zero-shot setting and their comparison to LLM models of similar size reported in the literature can be found in Tables 3-5 and 7 of the [research paper](https://arxiv.org/abs/2502.13502).
- For the implementation via the Hugging Face Transformers library, evaluating on the same benchmark suite gives the same results as in the paper for all benchmarks, except for the PIQA score being slightly higher at 61.75 for this model.
### BibTeX entry and citation info
Please cite this model as:
```bibtex
@misc{gokden2025pldrllmkvgcache,
title={PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference},
author={Burc Gokden},
year={2025},
eprint={2502.13502},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.13502},
}
```
|
vishal87er/blockassist-bc-gilded_grassy_elk_1756303059
|
vishal87er
| 2025-08-27T13:58:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gilded grassy elk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:58:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gilded grassy elk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qwersdfvg/blockassist-bc-alert_melodic_swan_1756303073
|
qwersdfvg
| 2025-08-27T13:58:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert melodic swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:57:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert melodic swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nannnzk/task-13-Qwen-Qwen2.5-3B-Instruct
|
nannnzk
| 2025-08-27T13:58:11Z | 141 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-08-08T04:14:10Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
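Since the card provides no code here, the following is a minimal, hedged sketch for loading this adapter with PEFT, based only on the base model and library named in the metadata above:
```python
# Hedged sketch (not provided by the card authors): load the PEFT adapter
# on top of its declared base model, Qwen/Qwen2.5-3B-Instruct.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "nannnzk/task-13-Qwen-Qwen2.5-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
```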
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
volkantifozi/vg
|
volkantifozi
| 2025-08-27T13:57:55Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T13:57:55Z |
---
license: apache-2.0
---
|
fujiantiiazhraa/blockassist-bc-marine_robust_bee_1756301535
|
fujiantiiazhraa
| 2025-08-27T13:57:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine robust bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:57:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine robust bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756302942
|
Dejiat
| 2025-08-27T13:56:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:56:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qwersdfvg/blockassist-bc-tricky_mottled_whale_1756302922
|
qwersdfvg
| 2025-08-27T13:55:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tricky mottled whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:55:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tricky mottled whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756301272
|
coelacanthxyz
| 2025-08-27T13:55:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:55:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
annasoli/gemma-2-9b-it_SV_l20_lr5e-4_a256_KL1e6
|
annasoli
| 2025-08-27T13:54:48Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T13:54:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
qwersdfvg/blockassist-bc-mighty_moist_barracuda_1756302853
|
qwersdfvg
| 2025-08-27T13:54:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mighty moist barracuda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:54:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty moist barracuda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756302791
|
xinnn32
| 2025-08-27T13:53:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:53:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fromthesky/PLDR-LLM-v51-110M-2
|
fromthesky
| 2025-08-27T13:53:31Z | 0 | 0 | null |
[
"safetensors",
"pldrllm",
"text-generation",
"large-language-model",
"power-law-decoder-representations",
"power-law-graph-attention",
"pldr-llm",
"kv-cache",
"g-cache",
"kvg-cache",
"pytorch",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2502.13502",
"arxiv:2306.01116",
"arxiv:2101.00027",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-02-23T08:04:14Z |
---
language:
- en
tags:
- text-generation
- large-language-model
- power-law-decoder-representations
- power-law-graph-attention
- pldr-llm
- kv-cache
- g-cache
- kvg-cache
- pytorch
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
---
# PLDR-LLM-v51-110M-2
## Model Description
PLDR-LLM-v51-110M-2 is a large language model from power law decoder representations (PLDR-LLM) with KV-cache and G-cache support. PLDR-LLM is a new foundational language model architecture that utilizes power law graph attention to generate deductive and inductive outputs. This model has a parameter size of 110M and corresponds to PLDRv51-110M-2, whose architecture and training details are provided in Table 1 of the research paper titled [PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference](https://arxiv.org/abs/2502.13502).
## Training data
PLDR-LLM-v51-110M-2 was pretrained on the [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a publicly available English web dataset with extensive filtering and deduplication.
## Training procedure
This model was trained for ~8B tokens on RefinedWeb over 250k steps per rank. It was trained autoregressively with cross-entropy loss.
## Intended Use and Limitations
This model is intended to be used for research purposes. Given a text prompt as input, it carries out next-token prediction to generate continuation text. The context length for this model is 1024 tokens.
## How to Use
### Via Huggingface Transformers Library
PLDR-LLM has custom model support for the Hugging Face Transformers library. This support was developed against the Transformers v4.55.4 release, the latest available at the time.
Using `pipeline`:
```python
from transformers import pipeline
generator = pipeline(
    task="text-generation",
    model="fromthesky/PLDR-LLM-v51-110M-2",
    device="cuda"
)
prompt = "PLDR-LLM is a large language model architecture developed by Fromthesky Research Labs."
output = generator(prompt, top_p=0.6, top_k=0, temperature=1, do_sample=True, max_new_tokens=100)
print(output[0]["generated_text"])
```
Using `AutoModel`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device="cuda" # or "cpu"
model=AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path="fromthesky/PLDR-LLM-v51-110M-2",
device_map=device,
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    pretrained_model_name_or_path="fromthesky/PLDR-LLM-v51-110M-2",
    add_eos_token=False,
    legacy=False,
    trust_remote_code=True
)
prompt="PLDR-LLM is a large language model architecture developed by Fromthesky Research Labs."
inputs = tokenizer([prompt], return_tensors="pt").to(device=device)
generated_ids = model.generate(**inputs,
max_new_tokens=100,
top_p=0.6,
top_k=0,
temperature=1,
do_sample=True,
use_cache=True
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
#### PLDR-LLM specific configurations:
- `custom_G_type`: `None` for G values learned during pretraining, `'identity'` for an LLM equivalent to SDPA, `'random'` for G values drawn from a random normal distribution, `'external'` for custom G values that can be assigned after model initialization. This setting matters mainly for training; for inference it is set in the model's config.json file.
- `cache_first_G`: For batched inference, if set to `True`, the G values from the first sample prompt in the batch are cached for all samples; if set to `False`, G values are cached separately for each sample prompt in the batch. For contrastive generation with `custom_G_type=None`, this needs to be set to `True`.
- `reference_rope`: If set to `True`, the RoPE implementation from the original paper is used; this is the case for the model pretrained in this repo. If set to `False`, the RoPE implementation from the Hugging Face Transformers library is used.
- `output_pldr_attentions=True` returns the deductive outputs and learnable parameters of the power law graph attention module as a tuple containing: the output of the residual metric learner (metric tensor, **A**), the output (**A<sub>LM</sub>**) after application of iSwiGLU on the metric tensor, the learned exponents of the potential tensor, the learned weights for the energy-curvature tensor, the learned bias for the energy-curvature tensor, the energy-curvature tensor (**G<sub>LM</sub>**), and the attention weights.
See config.json for other model configuration details.
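Continuing from the `AutoModel` example above, a minimal sketch of batched generation, where `cache_first_G` and `custom_G_type` come into play, is shown below; reading these values off `model.config` is an assumption about where they are stored:
```python
# Hedged sketch: batched generation with padding; cache_first_G and
# custom_G_type are set in config.json and are inspected here for illustration.
print(getattr(model.config, "custom_G_type", None), getattr(model.config, "cache_first_G", None))
prompts = ["PLDR-LLM is", "Power law graph attention is"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(device)
generated_ids = model.generate(**inputs, max_new_tokens=50, do_sample=False, use_cache=True)
for ids in generated_ids:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```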
#### Notes:
- Transformers v4.55.4 causes generation with a quantized cache to fail at the time of this writing. To overcome this issue, install the most recent version of the transformers library from source:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e ".[dev]"
```
We also maintain a fork of the transformers library with PLDR-LLM model support for future development. The PLDR-LLM model files are added to the library, so custom model files are not necessary.
```bash
git clone https://github.com/burcgokden/transformers
cd transformers
git checkout add_PLDR_LLM
pip install -e ".[dev]"
```
- Static cache is not supported for models with `custom_G_type=None`.
- When `add_bos_token=False` and `add_eos_token=False` are set for the tokenizer model, the prompt `""` is an invalid input for single-batch inference, as it doesn't contain any tokens. When padding is enabled, batched inference with the prompt `""` as one of the samples causes its `input_ids` to be pad tokens and its `attention_mask` to be all zeros. This edge case is handled differently for `_attn_implementation='eager'` and `'sdpa'`, resulting in different generation outputs for this prompt. Setting `add_bos_token=True`, `add_eos_token=True`, or explicitly providing the prompt as `"[PAD]"`, `"[START]"`, or `"[END]"` gives the same output for either implementation. This issue does not affect KV-cache and G-cache.
### Via Original Implementation
- The original model implementation files can be found in the folder named `paper_saved_model_files/`. The model checkpoint and tokenizer can be loaded into the PLDR-LLM framework to generate text as described in the code repository for training this model: [PLDR-LLM-with-KVG-cache](https://github.com/burcgokden/PLDR-LLM-with-KVG-cache).
### LM Evaluation Harness Support
- The model can be used with a fork of LM-Evaluation-Harness Suite with PLDR-LLM with KV-cache and G-cache support: [lm-evaluation-harness-with-PLDR-LLM-kvg-cache](https://github.com/burcgokden/lm-evaluation-harness-with-PLDR-LLM-kvg-cache).
### Limitations and Biases
Large language models may generate text that is profane, lewd, socially unacceptable or offensive based on the contents of the dataset they were pretrained on. RefinedWeb is a dataset that is as toxic and biased as the Pile. Please see the papers for [RefinedWeb](https://arxiv.org/abs/2306.01116) and [the Pile](https://arxiv.org/pdf/2101.00027) for more information. Moreover, large language models are also susceptible to hallucinations and may generate text that contains incorrect, irrelevant or misleading information. Since it is very hard to anticipate the contents of generated text ahead of time, the output of large language models needs to be heavily moderated and curated to avoid undesired content appearing without warning.
## Eval results
- The evaluation results on benchmarks with zero-shot setting and their comparison to LLM models of similar size reported in the literature can be found in Tables 3-5 and 7 of the [research paper](https://arxiv.org/abs/2502.13502).
- For the implementation via the Hugging Face Transformers library, evaluating on the same benchmark suite gives the same results as in the paper for all benchmarks, except for the PIQA score being slightly higher at 62.30 for this model.
### BibTeX entry and citation info
Please cite this model as:
```bibtex
@misc{gokden2025pldrllmkvgcache,
title={PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference},
author={Burc Gokden},
year={2025},
eprint={2502.13502},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.13502},
}
```
|
qwersdfvg/blockassist-bc-dense_unseen_komodo_1756302708
|
qwersdfvg
| 2025-08-27T13:52:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dense unseen komodo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:51:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dense unseen komodo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qwersdfvg/blockassist-bc-reclusive_scruffy_gibbon_1756302649
|
qwersdfvg
| 2025-08-27T13:51:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive scruffy gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:50:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive scruffy gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qwersdfvg/blockassist-bc-deft_silent_flamingo_1756302575
|
qwersdfvg
| 2025-08-27T13:50:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deft silent flamingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:49:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deft silent flamingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756300085
|
acidjp
| 2025-08-27T13:48:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:48:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
domdom/checkpoint
|
domdom
| 2025-08-27T13:47:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T12:06:09Z |
---
library_name: transformers
model_name: checkpoint
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for checkpoint
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="domdom/checkpoint", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MaziyarPanahi/Qwen3-30B-A3B-Instruct-2507-GGUF
|
MaziyarPanahi
| 2025-08-27T13:47:16Z | 0 | 0 | null |
[
"gguf",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507",
"region:us",
"conversational"
] |
text-generation
| 2025-08-27T12:08:42Z |
---
base_model: Qwen/Qwen3-30B-A3B-Instruct-2507
inference: false
model_creator: Qwen
model_name: Qwen3-30B-A3B-Instruct-2507-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/Qwen3-30B-A3B-Instruct-2507-GGUF](https://huggingface.co/MaziyarPanahi/Qwen3-30B-A3B-Instruct-2507-GGUF)
- Model creator: [Qwen](https://huggingface.co/Qwen)
- Original model: [Qwen/Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507)
## Description
[MaziyarPanahi/Qwen3-30B-A3B-Instruct-2507-GGUF](https://huggingface.co/MaziyarPanahi/Qwen3-30B-A3B-Instruct-2507-GGUF) contains GGUF format model files for [Qwen/Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
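For a minimal, hedged example, one of these files can be run with `llama-cpp-python`; the filename below is an assumption, so substitute any quant from this repository's file list:
```python
# Hedged sketch: running a GGUF quant of this model with llama-cpp-python.
# The model_path filename is an assumption; pick any quant from this repo.
from llama_cpp import Llama

llm = Llama(model_path="Qwen3-30B-A3B-Instruct-2507.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain the GGUF format in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```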
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756302393
|
Ferdi3425
| 2025-08-27T13:47:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:47:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-5-v2_9887
|
luckeciano
| 2025-08-27T13:47:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T09:47:53Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-5-v2_9887
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-5-v2_9887
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-5-v2_9887", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/mhp5p99n)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
srideepalla/sd-onnx-models
|
srideepalla
| 2025-08-27T13:46:20Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T13:09:47Z |
---
license: apache-2.0
---
|
SelmaNajih001/ModelloGRPORagMinstral
|
SelmaNajih001
| 2025-08-27T13:45:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation",
"en",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T11:36:28Z |
---
library_name: transformers
language:
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
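Since the card provides no code here, the following is a minimal, hedged sketch based only on the metadata above:
```python
# Hedged sketch (not provided by the card authors): basic text generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("SelmaNajih001/ModelloGRPORagMinstral", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("SelmaNajih001/ModelloGRPORagMinstral")
inputs = tokenizer("The market reacted to the news by", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```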
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
qwersdfvg/blockassist-bc-shaggy_gilded_falcon_1756302289
|
qwersdfvg
| 2025-08-27T13:45:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shaggy gilded falcon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:44:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shaggy gilded falcon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/PaperPrediction-LLM-1B-GGUF
|
mradermacher
| 2025-08-27T13:44:16Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:PaperPred/PaperPrediction-LLM-1B",
"base_model:quantized:PaperPred/PaperPrediction-LLM-1B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T13:03:46Z |
---
base_model: PaperPred/PaperPrediction-LLM-1B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/PaperPred/PaperPrediction-LLM-1B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PaperPrediction-LLM-1B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
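As a minimal, hedged sketch, a single quant file from the table below can be fetched with `huggingface_hub` and then passed to a GGUF runtime such as llama.cpp:
```python
# Hedged sketch: download one quant file from this repository.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/PaperPrediction-LLM-1B-GGUF",
    filename="PaperPrediction-LLM-1B.Q4_K_M.gguf",
)
print(path)  # local path to hand to a GGUF runtime such as llama.cpp
```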
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-1B-GGUF/resolve/main/PaperPrediction-LLM-1B.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hoanganhvutk31/mrpc-bert-finetuned
|
hoanganhvutk31
| 2025-08-27T13:44:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-27T13:43:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
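Since the card provides no code here, the following is a minimal, hedged sketch based only on the metadata above (a BERT text-classification model, presumably fine-tuned on MRPC paraphrase pairs):
```python
# Hedged sketch (not provided by the card authors): sentence-pair classification.
from transformers import pipeline

clf = pipeline("text-classification", model="hoanganhvutk31/mrpc-bert-finetuned")
print(clf({"text": "The cat sat on the mat.", "text_pair": "A cat was sitting on the mat."}))
```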
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JiHBijou/SAFE_0826
|
JiHBijou
| 2025-08-27T13:43:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-26T04:02:18Z |
# SAFE Video Challenge Example Submission
The key requirement is to have a `script.py` file in the top-level directory of the repo, and optionally a `requirements.txt` file.
For more details: https://safe-video-2025.dsri.org/#-model-submission
|
shery8595/blockassist-bc-solitary_hunting_ant_1756302093
|
shery8595
| 2025-08-27T13:42:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"solitary hunting ant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:41:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- solitary hunting ant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756302081
|
Ferdi3425
| 2025-08-27T13:41:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:41:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756302026
|
xinnn32
| 2025-08-27T13:41:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:40:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756300232
|
lisaozill03
| 2025-08-27T13:38:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:38:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756300745
|
Sayemahsjn
| 2025-08-27T13:37:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:37:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hunchteller/sft_qwen2_loc
|
hunchteller
| 2025-08-27T13:37:01Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T12:12:31Z |
---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: sft_qwen2_loc
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft_qwen2_loc
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hunchteller/sft_qwen2_loc", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
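As a rough reproduction sketch with TRL's `SFTTrainer` (hypothetical — the dataset and hyperparameters below are placeholders, since the actual training data for this checkpoint is not documented here):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2-VL-2B-Instruct",           # base model from this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft_qwen2_loc"),
)
trainer.train()
```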
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qwersdfvg/blockassist-bc-tenacious_rugged_cheetah_1756301751
|
qwersdfvg
| 2025-08-27T13:36:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tenacious rugged cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:35:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tenacious rugged cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1756300185
|
elmenbillion
| 2025-08-27T13:35:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:35:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Corianas/char128_shift_tokenizer
|
Corianas
| 2025-08-27T13:35:38Z | 0 | 0 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2025-08-27T13:10:59Z |
---
language:
- en
license: mit
---
# char128-shift Tokenizer
A fixed-size Hugging Face–compatible **character tokenizer** with a dedicated **SHIFT** token (`↨`) to represent uppercase letters. Instead of assigning separate tokens to uppercase `A–Z`, each uppercase is encoded as `↨` + lowercase (e.g., `H` → `↨h`).
This repository contains the ready-to-use tokenizer, which can be loaded with `AutoTokenizer`, as well as the script that built it (in the `src/` folder).
---
## Features
* **Fixed 128-token vocabulary** (including specials).
* **Uppercase encoding via SHIFT token**, no duplicate uppercase letters in vocab.
* **WordLevel model** with explicit closed character set.
* **Pre-tokenizer** splits by Unicode grapheme clusters (`\X`), so emoji and diacritics are preserved.
* **Normalizer** maps `A–Z` → `↨` + lowercase explicitly.
* **Decoder** concatenates tokens directly (no extra spaces).
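A rough sketch of the SHIFT rule with the `tokenizers` library is shown below — hypothetical, as the shipped tokenizer's exact rule set may differ:
```python
from tokenizers import normalizers

SHIFT = "↨"
# One explicit Replace rule per ASCII uppercase letter: "H" -> "↨h", etc.
rules = [normalizers.Replace(chr(c), SHIFT + chr(c).lower())
         for c in range(ord("A"), ord("Z") + 1)]
norm = normalizers.Sequence(rules)

print(norm.normalize_str("Hello, There!"))  # ↨hello, ↨there!
```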
---
## Installation
You only need `transformers` (for Python interface) and optionally `tokenizers` (for advanced building).
```bash
pip install "transformers>=4.40" "tokenizers>=0.14"
```
No PyTorch/TensorFlow/Flax required to use the tokenizer itself.
---
## Usage
### Load from local folder
```python
from transformers import AutoTokenizer
# Load local tokenizer folder
tok = AutoTokenizer.from_pretrained("char128_shift_tokenizer")
print(tok.vocab_size) # 128
ids = tok.encode("Hello, There!\n<eos>")
print(ids)
print(tok.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
# → "↨hello, ↨there!\n<eos>"
```
### Load from Hugging Face Hub
```python
from transformers import AutoTokenizer
# Replace with your Hub repo
tok = AutoTokenizer.from_pretrained("Corianas/char128_shift_tokenizer")
```
---
## Restoring Uppercase
The decode output will show SHIFT markers (e.g., `↨h`). For display, restore casing:
```python
def restore_uppercase(s: str, shift="↨"):
    """Turn each SHIFT marker + letter back into an uppercase letter."""
    out, i, n = [], 0, len(s)
    while i < n:
        # A SHIFT followed by a non-SHIFT character encodes one uppercase letter.
        if s[i] == shift and i + 1 < n and s[i + 1] != shift:
            out.append(s[i + 1].upper()); i += 2
        else:
            out.append(s[i]); i += 1
    return "".join(out)
ids = tok.encode("Hello, There!\n<eos>")
decoded = tok.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded) # "↨hello, ↨there!\n<eos>"
print(restore_uppercase(decoded)) # "Hello, There!\n<eos>"
```
---
## Vocabulary
The 128 tokens include:
* **Lowercase letters** `a–z`
* **Digits** `0–9`
* **Whitespace** (space, `\n`, `\t`)
* **Punctuation and symbols** (configurable)
* **Diacritics** like `è`, `é` if needed
* **Special tokens** `<pad>`, `<unk>`, `<bos>`, `<eos>`
* **SHIFT token** `↨`
Uppercase `A–Z` are **not** in vocab — they are represented via SHIFT.
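A quick sanity check of these invariants (a sketch, assuming the tokenizer loads as shown above):
```python
vocab = tok.get_vocab()
assert len(vocab) == 128                   # fixed-size vocabulary
assert "↨" in vocab                        # SHIFT token is a real vocab entry
assert "A" not in vocab and "a" in vocab   # uppercase only via SHIFT
```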
---
## Integration
For dataset preparation:
```python
import numpy as np, os
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("char128_shift_tokenizer")
with open("input.txt", "r", encoding="utf-8") as f:
data = f.read()
n = len(data)
train_txt, val_txt = data[:int(0.9*n)], data[int(0.9*n):]
train_ids = tok.encode(train_txt)
val_ids = tok.encode(val_txt)
np.array(train_ids, dtype=np.uint16).tofile("train.bin")
np.array(val_ids, dtype=np.uint16).tofile("val.bin")
```
Your model’s `vocab_size` must match (128).
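To read the packed ids back, a minimal sketch (`np.uint16` is safe here because every id is below 65536):
```python
train = np.memmap("train.bin", dtype=np.uint16, mode="r")
print(tok.decode(train[:80].tolist(), skip_special_tokens=True,
                 clean_up_tokenization_spaces=False))
```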
---
## Known Edge Cases
* **Non-ASCII uppercase letters** (like `À`, `É`) are lowercased without SHIFT unless you add explicit rules.
* **Spaces in decode** are disabled by setting the decoder to plain concatenation; if you see them, ensure your tokenizer was saved with `tok.decoder = decoders.Sequence([])`.
* **Unknown chars** map to `<unk>`. Ensure your vocab includes everything you expect.
---
## License
MIT
---
## Example Test
```python
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("Corianas/char128_shift_tokenizer")
ids = tok.encode("Hello, There!\n<eos>")
print(ids)
print(tok.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
# ↨hello, ↨there!\n<eos>
```
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756301707
|
Dejiat
| 2025-08-27T13:35:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:35:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756301676
|
xinnn32
| 2025-08-27T13:35:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T13:35:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
annasoli/gemma-2-9b-it_SV_l20_lr5e-4_a256_nKL
|
annasoli
| 2025-08-27T13:34:54Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T13:34:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|