modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-22 00:45:16) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 570 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-22 00:43:28) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
onnxmodelzoo/inception_v4_Opset18
|
onnxmodelzoo
| 2025-09-19T17:42:32Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:41:58Z |
---
language: en
license: apache-2.0
model_name: inception_v4_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/inception_resnet_v2_Opset18
|
onnxmodelzoo
| 2025-09-19T17:40:59Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:40:36Z |
---
language: en
license: apache-2.0
model_name: inception_resnet_v2_Opset18.onnx
tags:
- Computer_Vision
---
|
Jariixjarox/Qwen3-0.6B-Gensyn-Swarm-hairy_striped_worm
|
Jariixjarox
| 2025-09-19T17:39:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am hairy_striped_worm",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T17:38:52Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am hairy_striped_worm
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
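A minimal sketch, assuming the standard 🤗 Transformers text-generation pipeline and this repository id (illustrative, not part of the original card):
```python
# Illustrative sketch; assumes the standard transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Jariixjarox/Qwen3-0.6B-Gensyn-Swarm-hairy_striped_worm",
    device_map="auto",
)
messages = [{"role": "user", "content": "Say hello to the swarm."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```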
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Marorelunen/Qwen3-0.6B-Gensyn-Swarm-scurrying_fluffy_chameleon
|
Marorelunen
| 2025-09-19T17:38:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am scurrying_fluffy_chameleon",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T17:37:43Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am scurrying_fluffy_chameleon
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
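A minimal sketch, assuming the standard 🤗 Transformers causal-LM API and this repository id (illustrative, not part of the original card):
```python
# Illustrative sketch; assumes the standard transformers causal-LM API.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Marorelunen/Qwen3-0.6B-Gensyn-Swarm-scurrying_fluffy_chameleon"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```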
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnxmodelzoo/ig_resnext101_32x16d_Opset16
|
onnxmodelzoo
| 2025-09-19T17:35:48Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:35:24Z |
---
language: en
license: apache-2.0
model_name: ig_resnext101_32x16d_Opset16.onnx
tags:
- Computer_Vision
---
|
b1n1yam/addis-ai-50k-vocab-mistral-7b-v0.3-tok
|
b1n1yam
| 2025-09-19T17:35:09Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T17:35:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
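The repository name suggests this repo hosts a tokenizer (a 50k-vocabulary variant for Mistral-7B-v0.3). Under that assumption, a minimal loading sketch:
```python
# Sketch assuming the repository hosts a tokenizer, as the repo name suggests.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("b1n1yam/addis-ai-50k-vocab-mistral-7b-v0.3-tok")
print(tokenizer.vocab_size)
print(tokenizer.tokenize("Hello, world!"))
```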
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnxmodelzoo/hrnet_w48_Opset18
|
onnxmodelzoo
| 2025-09-19T17:33:47Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:33:27Z |
---
language: en
license: apache-2.0
model_name: hrnet_w48_Opset18.onnx
tags:
- Computer_Vision
---
|
WenFengg/MOes20Sat_14_4
|
WenFengg
| 2025-09-19T17:32:51Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-19T17:32:09Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
onnxmodelzoo/hrnet_w44_Opset17
|
onnxmodelzoo
| 2025-09-19T17:32:18Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:31:58Z |
---
language: en
license: apache-2.0
model_name: hrnet_w44_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/hrnet_w32_Opset16
|
onnxmodelzoo
| 2025-09-19T17:30:12Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:29:57Z |
---
language: en
license: apache-2.0
model_name: hrnet_w32_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/hrnet_w30_Opset17
|
onnxmodelzoo
| 2025-09-19T17:29:43Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:29:28Z |
---
language: en
license: apache-2.0
model_name: hrnet_w30_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/hrnet_w18_small_v2_Opset18
|
onnxmodelzoo
| 2025-09-19T17:29:13Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:29:03Z |
---
language: en
license: apache-2.0
model_name: hrnet_w18_small_v2_Opset18.onnx
tags:
- Computer_Vision
---
|
WenFengg/MOes20Sat_14_3
|
WenFengg
| 2025-09-19T17:28:42Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-19T17:28:03Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758302352
|
schooncestiaa
| 2025-09-19T17:20:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T17:20:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
onnxmodelzoo/hrnet_w18_Opset16
|
onnxmodelzoo
| 2025-09-19T17:19:25Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:19:16Z |
---
language: en
license: apache-2.0
model_name: hrnet_w18_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/gluon_senet154_Opset17
|
onnxmodelzoo
| 2025-09-19T17:16:12Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:15:49Z |
---
language: en
license: apache-2.0
model_name: gluon_senet154_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/gluon_resnext50_32x4d_Opset18
|
onnxmodelzoo
| 2025-09-19T17:15:24Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:15:16Z |
---
language: en
license: apache-2.0
model_name: gluon_resnext50_32x4d_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/gluon_resnext101_32x4d_Opset18
|
onnxmodelzoo
| 2025-09-19T17:13:57Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:13:44Z |
---
language: en
license: apache-2.0
model_name: gluon_resnext101_32x4d_Opset18.onnx
tags:
- Computer_Vision
---
|
hyongok2/command-r-35b
|
hyongok2
| 2025-09-19T17:13:56Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T15:20:11Z |
---
license: apache-2.0
---
|
onnxmodelzoo/gluon_resnet152_v1s_Opset17
|
onnxmodelzoo
| 2025-09-19T17:10:37Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:10:22Z |
---
language: en
license: apache-2.0
model_name: gluon_resnet152_v1s_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/gluon_resnet152_v1s_Opset16
|
onnxmodelzoo
| 2025-09-19T17:10:22Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:10:04Z |
---
language: en
license: apache-2.0
model_name: gluon_resnet152_v1s_Opset16.onnx
tags:
- Computer_Vision
---
|
david4096/agro-all-MiniLM-L6-v2_concat_gcn_h128_o64_triplet_e256_knowledge-3
|
david4096
| 2025-09-19T17:09:56Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"knowledge-enhanced",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T17:09:52Z |
---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- knowledge-enhanced
pipeline_tag: sentence-similarity
---
# agro_all-MiniLM-L6-v2_concat_gcn_h128_o64_triplet_e256_knowledge
This is a knowledge-enhanced sentence transformer model created with [on2vec](https://github.com/davidandrzej/on2vec).
## Model Details
- **Base Model**: sentence-transformers/all-MiniLM-L6-v2
- **Architecture**: Knowledge-Enhanced Transformer (experimental)
- **Knowledge Dim**: 256
- **Max Concepts**: 3
- **Created with**: on2vec knowledge-enhanced architecture
## Usage
⚠️ **Note**: This is an experimental knowledge-enhanced model that requires special handling.
```python
# This model cannot be loaded with standard SentenceTransformer.load()
# Contact the model creator for usage instructions
```
## Architecture
This model uses a fundamentally different approach than standard fusion models:
- Token embeddings are enhanced with ontology knowledge during forward pass
- End-to-end training in unified representation space
- No separate lookup/fusion step
Generated by on2vec knowledge-enhanced transformer.
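To make the idea concrete, the following is a purely illustrative sketch (not the on2vec implementation) of enriching token embeddings with projected ontology-concept vectors inside the forward pass:
```python
# Illustrative only: not the actual on2vec code.
import torch.nn as nn

class KnowledgeEnhancedEmbedding(nn.Module):
    """Adds projected ontology-concept vectors to token embeddings."""

    def __init__(self, token_embedding: nn.Embedding, num_concepts: int, knowledge_dim: int = 256):
        super().__init__()
        self.token_embedding = token_embedding
        self.concept_embedding = nn.Embedding(num_concepts, knowledge_dim)
        self.project = nn.Linear(knowledge_dim, token_embedding.embedding_dim)

    def forward(self, input_ids, concept_ids):
        # concept_ids: ontology concepts linked to each token position
        tokens = self.token_embedding(input_ids)
        knowledge = self.project(self.concept_embedding(concept_ids))
        return tokens + knowledge  # enhancement happens in the same representation space
```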
|
onnxmodelzoo/gluon_resnet101_v1s_Opset18
|
onnxmodelzoo
| 2025-09-19T17:06:42Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:06:31Z |
---
language: en
license: apache-2.0
model_name: gluon_resnet101_v1s_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/gluon_resnet101_v1d_Opset18
|
onnxmodelzoo
| 2025-09-19T17:06:02Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:05:51Z |
---
language: en
license: apache-2.0
model_name: gluon_resnet101_v1d_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/gluon_resnet101_v1d_Opset17
|
onnxmodelzoo
| 2025-09-19T17:05:51Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:05:36Z |
---
language: en
license: apache-2.0
model_name: gluon_resnet101_v1d_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/gluon_resnet101_v1c_Opset17
|
onnxmodelzoo
| 2025-09-19T17:05:11Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:04:56Z |
---
language: en
license: apache-2.0
model_name: gluon_resnet101_v1c_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/gluon_inception_v3_Opset16
|
onnxmodelzoo
| 2025-09-19T17:03:44Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:03:35Z |
---
language: en
license: apache-2.0
model_name: gluon_inception_v3_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/gernet_l_Opset18
|
onnxmodelzoo
| 2025-09-19T17:02:47Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T17:02:38Z |
---
language: en
license: apache-2.0
model_name: gernet_l_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/fasterrcnn_resnet50_fpn_v2_Opset17
|
onnxmodelzoo
| 2025-09-19T16:58:55Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T16:58:25Z |
---
language: en
license: apache-2.0
model_name: fasterrcnn_resnet50_fpn_v2_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/ens_adv_inception_resnet_v2_Opset16
|
onnxmodelzoo
| 2025-09-19T16:56:26Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T16:56:12Z |
---
language: en
license: apache-2.0
model_name: ens_adv_inception_resnet_v2_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/efficientnetv2_rw_s_Opset16
|
onnxmodelzoo
| 2025-09-19T16:55:51Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T16:55:43Z |
---
language: en
license: apache-2.0
model_name: efficientnetv2_rw_s_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/efficientnet_lite0_Opset18
|
onnxmodelzoo
| 2025-09-19T16:54:49Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T16:54:45Z |
---
language: en
license: apache-2.0
model_name: efficientnet_lite0_Opset18.onnx
tags:
- Computer_Vision
---
|
ellisdoro/apollo_sv-all-MiniLM-L6-v2_concat_gcn_h128_o64_triplet_e100_knowledge-k
|
ellisdoro
| 2025-09-19T16:49:23Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"knowledge-enhanced",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T16:49:19Z |
---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- knowledge-enhanced
pipeline_tag: sentence-similarity
---
# apollo_sv_all-MiniLM-L6-v2_concat_gcn_h128_o64_triplet_e100_knowledge
This is a knowledge-enhanced sentence transformer model created with [on2vec](https://github.com/davidandrzej/on2vec).
## Model Details
- **Base Model**: sentence-transformers/all-MiniLM-L6-v2
- **Architecture**: Knowledge-Enhanced Transformer (experimental)
- **Knowledge Dim**: 256
- **Max Concepts**: 3
- **Created with**: on2vec knowledge-enhanced architecture
## Usage
⚠️ **Note**: This is an experimental knowledge-enhanced model that requires special handling.
```python
# This model cannot be loaded with standard SentenceTransformer.load()
# Contact the model creator for usage instructions
```
## Architecture
This model uses a fundamentally different approach than standard fusion models:
- Token embeddings are enhanced with ontology knowledge during forward pass
- End-to-end training in unified representation space
- No separate lookup/fusion step
Generated by on2vec knowledge-enhanced transformer.
|
ellisdoro/afpo-all-MiniLM-L6-v2_concat_gcn_h128_o64_triplet_e100_knowledge-k
|
ellisdoro
| 2025-09-19T16:49:02Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"knowledge-enhanced",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T16:48:58Z |
---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- knowledge-enhanced
pipeline_tag: sentence-similarity
---
# afpo_all-MiniLM-L6-v2_concat_gcn_h128_o64_triplet_e100_knowledge
This is a knowledge-enhanced sentence transformer model created with [on2vec](https://github.com/davidandrzej/on2vec).
## Model Details
- **Base Model**: sentence-transformers/all-MiniLM-L6-v2
- **Architecture**: Knowledge-Enhanced Transformer (experimental)
- **Knowledge Dim**: 256
- **Max Concepts**: 3
- **Created with**: on2vec knowledge-enhanced architecture
## Usage
⚠️ **Note**: This is an experimental knowledge-enhanced model that requires special handling.
```python
# This model cannot be loaded with standard SentenceTransformer.load()
# Contact the model creator for usage instructions
```
## Architecture
This model uses a fundamentally different approach than standard fusion models:
- Token embeddings are enhanced with ontology knowledge during forward pass
- End-to-end training in unified representation space
- No separate lookup/fusion step
Generated by on2vec knowledge-enhanced transformer.
|
walkenone/Qwen3-0.6B-Gensyn-Swarm-lithe_stubby_chicken
|
walkenone
| 2025-09-19T16:42:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am lithe_stubby_chicken",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T16:41:51Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lithe_stubby_chicken
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
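A minimal sketch, assuming the standard 🤗 Transformers text-generation pipeline and this repository id (illustrative, not part of the original card):
```python
# Illustrative sketch; assumes the standard transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="walkenone/Qwen3-0.6B-Gensyn-Swarm-lithe_stubby_chicken",
)
prompt = [{"role": "user", "content": "Write a haiku about a chicken."}]
result = generator(prompt, max_new_tokens=48, do_sample=True, temperature=0.7, return_full_text=False)
print(result[0]["generated_text"])
```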
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
powerpump32/lenamonetti
|
powerpump32
| 2025-09-19T16:42:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-19T12:56:58Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Lenamonetti
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/powerpump32/lenamonetti/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('powerpump32/lenamonetti', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/powerpump32/lenamonetti/discussions) to add images that show off what you’ve made with this LoRA.
|
techparasite/RMBGFast
|
techparasite
| 2025-09-19T16:40:41Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T20:43:27Z |
---
license: apache-2.0
---
|
alesiaivanova/Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1536-lr-2e-6-4-sub-1792-lr-5e-7
|
alesiaivanova
| 2025-09-19T16:31:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T16:28:59Z |
---
library_name: transformers
model_name: Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1536-lr-2e-6-4-sub-1792-lr-5e-7
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1536-lr-2e-6-4-sub-1792-lr-5e-7
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/4ram8rke)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
alshafeay/my-finetuned-bert2_next
|
alshafeay
| 2025-09-19T16:26:39Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-19T16:19:43Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my-finetuned-bert2_next
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-finetuned-bert2_next
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
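As a reference, these values could be expressed with 🤗 `TrainingArguments` roughly as follows (a hypothetical sketch, not the original training script):
```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="my-finetuned-bert2_next",
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```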
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 196 | 0.2928 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
g-assismoraes/Qwen3-4B-gdirectDelta-stack-a0.8
|
g-assismoraes
| 2025-09-19T16:10:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T14:53:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
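A minimal sketch, assuming the standard 🤗 Transformers causal-LM API and this repository id (illustrative, not part of the original card):
```python
# Illustrative sketch; assumes the standard transformers causal-LM API.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "g-assismoraes/Qwen3-4B-gdirectDelta-stack-a0.8"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The key idea of this model is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```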
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OmAlve/Vaarta-Base
|
OmAlve
| 2025-09-19T16:10:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:HuggingFaceTB/SmolLM2-360M",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T16:10:22Z |
---
base_model: HuggingFaceTB/SmolLM2-360M
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** OmAlve
- **License:** apache-2.0
- **Finetuned from model:** HuggingFaceTB/SmolLM2-360M
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en-sft-40k
|
AmberYifan
| 2025-09-19T16:00:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en",
"base_model:finetune:AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T15:09:46Z |
---
library_name: transformers
license: llama3
base_model: AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-8b-full-pretrain-junk-tweet-1m-en-sft-40k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-full-pretrain-junk-tweet-1m-en-sft-40k
This model is a fine-tuned version of [AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en](https://huggingface.co/AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en) on the alpaca_en dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
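The total batch sizes follow from the per-device settings: 8 per device × 4 devices × 2 gradient-accumulation steps = 64 for training, and 8 × 4 = 32 for evaluation. A hypothetical `TrainingArguments` sketch of the same configuration (not the original training script):
```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="llama3-8b-full-pretrain-junk-tweet-1m-en-sft-40k",
    learning_rate=1e-5,
    per_device_train_batch_size=8,   # x 4 GPUs x 2 accumulation steps = 64
    per_device_eval_batch_size=8,    # x 4 GPUs = 32
    gradient_accumulation_steps=2,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
)
```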
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758297437
|
schooncestiaa
| 2025-09-19T15:58:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T15:58:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Danilo20203/654321
|
Danilo20203
| 2025-09-19T15:58:23Z | 296 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-09-17T17:28:15Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/Captura de pantalla 2025-09-17 122706.png
text: '-'
base_model: Qwen/Qwen-Image
instance_prompt: null
license: apache-2.0
---
# kdndyhrb1458
<Gallery />
## Download model
[Download](/Danilo20203/654321/tree/main) them in the Files & versions tab.
|
jasonhuang3/99-caldpo-qwen-2-5-7b-math-lora-0918
|
jasonhuang3
| 2025-09-19T15:29:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T17:12:43Z |
---
base_model: Qwen/Qwen2.5-Math-7B
library_name: transformers
model_name: 99-caldpo-qwen-2-5-7b-math-lora-0918
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for 99-caldpo-qwen-2-5-7b-math-lora-0918
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jasonhuang3/99-caldpo-qwen-2-5-7b-math-lora-0918", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jasonhuang3-school/huggingface/runs/regg031k)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758295599
|
schooncestiaa
| 2025-09-19T15:28:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T15:27:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aamijar/Mistral-7B-Instruct-v0.3-lora-r8-sst2-epochs2
|
aamijar
| 2025-09-19T15:16:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T15:16:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Fenilorkeleox/Qwen3-0.6B-Gensyn-Swarm-darting_scavenging_lemur
|
Fenilorkeleox
| 2025-09-19T15:11:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am darting_scavenging_lemur",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T15:11:35Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am darting_scavenging_lemur
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rabeeqasem/q-FrozenLake-v1-4x4-noSlippery
|
rabeeqasem
| 2025-09-19T15:10:05Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-19T15:10:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper from the Hugging Face Deep RL course utilities
# (it downloads and unpickles the Q-table dictionary from the Hub).
model = load_from_hub(repo_id="rabeeqasem/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Xenirorkelear/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-timid_patterned_barracuda
|
Xenirorkelear
| 2025-09-19T15:09:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am timid_patterned_barracuda",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T15:08:44Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am timid_patterned_barracuda
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AirSintez/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_insectivorous_orangutan
|
AirSintez
| 2025-09-19T15:08:37Z | 151 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am solitary_insectivorous_orangutan",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T06:24:45Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am solitary_insectivorous_orangutan
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758294347
|
schooncestiaa
| 2025-09-19T15:07:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T15:06:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alejamp/sam2_repo
|
alejamp
| 2025-09-19T15:02:22Z | 0 | 0 | null |
[
"sam2",
"arxiv:2304.02643",
"region:us"
] | null | 2025-09-19T14:27:39Z |
# SAM2 ID Segmenter
A lightweight wrapper and fine‑tuning scaffold around Meta's Segment Anything 2 (SAM2), adapted to segment structured regions in ID / document images (e.g. portrait, number field, security areas). The repository currently focuses on: (1) reproducible loading of a fine‑tuned SAM2 checkpoint, (2) automatic multi‑mask generation plus tight cropping, and (3) configuration‑file‑driven training/inference settings.
> Status: Inference wrapper implemented (`SamSegmentator`). End‑to‑end training loop is a planned addition. Config already anticipates training hyper‑parameters.
---
## Contents
1. Motivation & Scope
2. Intended Use & Non‑Goals
3. Repository Structure
4. Configuration (`config.json`)
5. Installation
6. Inference Usage (`SamSegmentator`)
7. Dataset & Mask Format (planned training)
8. Checkpoints & Auto‑Download
9. Metrics (recommended)
10. Limitations & Risks
11. Roadmap
12. License & Citation
---
## 1. Motivation & Scope
Document / ID workflows often need fast class‑agnostic region extraction (for OCR, redaction, or downstream classifiers). SAM2 provides strong general mask proposals; this project wraps it to directly yield cropped image + mask pairs ordered by area and optionally padded.
## 2. Intended Use & Non‑Goals
Intended:
- Pre‑segmentation of ID / document fields prior to OCR.
- Selective anonymization / redaction pipelines (masking faces, MRZ, barcodes, etc.).
- Rapid prototyping for custom fine‑tuning of SAM2 on a small set of document classes.
Non‑Goals:
- Biometric identity verification or authoritative fraud detection.
- Legal decision making without human review.
- Full multi‑modal extraction (text recognition is out of scope here).
## 3. Repository Structure
```
model_repo/
config.json # Central hyper‑parameter & path config
README.md # (this file)
checkpoints/ # Local downloaded / fine‑tuned checkpoints
samples/
sample_us_passport.jpg
src/
sam_segmentator.py # Inference wrapper (SamSegmentator)
main.py # Placeholder entry point
```
Planned: `train/` scripts for fine‑tuning (not yet implemented).
## 4. Configuration (`model_repo/config.json`)
Key fields (example values included in the repo):
- `model_type`: Always `sam2` here.
- `checkpoint_path`: Path relative to project root or absolute; if omitted and `auto_download=True` the code will attempt remote download.
- `image_size`: Target square size used during training (future). The inference wrapper accepts images at their original resolution.
- `num_classes`, `class_names`: For supervised training (future); not required by the current automatic mask generator, but kept for consistency.
- `augmentation`, `loss`, `optimizer`, `lr_scheduler`: Reserved for training loop integration.
- `paths`: Expected dataset layout for training: `data/train/images`, `data/train/masks`, etc.
- `mixed_precision`: Will enable `torch.autocast` during training.
Even if not all fields are consumed now, keeping them centralized avoids future breaking refactors.
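For illustration, the config can be read with the standard library; the field names below come from the list above, while the fallback values are assumptions:

```python
import json
from pathlib import Path

config = json.loads(Path("model_repo/config.json").read_text())

model_type = config["model_type"]             # "sam2"
checkpoint = config.get("checkpoint_path")    # may be absent when auto_download is enabled
class_names = config.get("class_names", [])   # e.g. ["ID1", "ID3", "IDCOVER"]
image_size = config.get("image_size")         # reserved for the future training loop
```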
## 5. Installation
### Prerequisites
- Python 3.10+ (recommended)
- CUDA GPU (optional but recommended for speed)
### Using uv (preferred fast resolver)
If `pyproject.toml` is present (it is), you can do:
```
uv sync
```
This creates / updates the virtual environment and installs dependencies.
### Using pip (alternative)
```
python -m venv .venv
.venv\Scripts\activate
pip install -U pip
pip install -e .
```
If SAM2 is not a published package in your environment, you may need to install it from source (instructions will depend on the upstream SAM2 repository—add here when finalized).
## 6. Inference Usage (`SamSegmentator`)
Minimal example using the sample passport image:
```python
import cv2
from pathlib import Path
from src.sam_segmentator import SamSegmentator
image_path = Path("samples/sample_us_passport.jpg")
img_bgr = cv2.imread(str(image_path)) # BGR (OpenCV)
segmentator = SamSegmentator(
checkpoint_path="checkpoints/sam2.1_hiera_base_plus_ft_ids.pt", # or None to auto-download if configured
pred_iou_thresh=0.88, # forwarded to SAM2AutomaticMaskGenerator
stability_score_thresh=0.90,
)
segments = segmentator.infer(img_bgr, pad_percent=0.05)
print(f"Total segments: {len(segments)}")
# Each segment is (crop_bgr, mask_255)
for i, (crop, mask) in enumerate(segments[:3]):
cv2.imwrite(f"outputs/segment_{i}_crop.png", crop)
cv2.imwrite(f"outputs/segment_{i}_mask.png", mask)
```
Output: pairs of tightly cropped images and their binary masks (0 background, 255 foreground), sorted by mask area descending.
### Parameter Notes
- `pad_percent`: Relative padding (default 5%) added around each tight bounding box.
- The deprecated `pad` (absolute pixels) is still accepted but emits a warning.
- All additional kwargs go to `SAM2AutomaticMaskGenerator` (e.g., `box_nms_thresh`, `min_mask_region_area`).
## 7. Dataset & Mask Format (For Future Training)
Expected layout (mirrors `paths` in config):
```
data/
train/
images/*.jpg|png
masks/*.png # Single‑channel, integer indices (0=background)
val/
images/
masks/
```
Class index mapping (example):
```
class_names = ["ID1", "ID3", "IDCOVER"]
0 -> background
1 -> ID1
2 -> ID3
3 -> IDCOVER
```
Masks should use nearest‑neighbor safe compression (PNG). Avoid palette mismatch; explicit integer pixel values are recommended.
## 8. Checkpoints & Auto‑Download
`SamSegmentator` will:
1. Use provided `checkpoint_path` if it exists.
2. If none is provided and `auto_download=True`, download the default checkpoint to `checkpoints/` using an environment-configured URL (`SAM2_CHECKPOINT_URL`).
3. (Optional) Validate SHA256 if `SAM2_CHECKPOINT_SHA256` is set.
Environment variables:
```
SAM2_CHECKPOINT_URL=<direct_download_url>
SAM2_CHECKPOINT_SHA256=<hex>
SAM2_CHECKPOINT_DIR=checkpoints
```
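As a rough sketch of the equivalent download-and-verify logic (not the repository's actual implementation):

```python
import hashlib
import os
import urllib.request
from pathlib import Path

url = os.environ["SAM2_CHECKPOINT_URL"]
ckpt_dir = Path(os.environ.get("SAM2_CHECKPOINT_DIR", "checkpoints"))
ckpt_dir.mkdir(parents=True, exist_ok=True)
dest = ckpt_dir / Path(url).name  # assumes the URL ends in the checkpoint filename

if not dest.exists():
    urllib.request.urlretrieve(url, dest)  # download the default checkpoint

expected = os.environ.get("SAM2_CHECKPOINT_SHA256")
if expected:
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    if digest != expected.lower():
        raise ValueError(f"Checkpoint SHA256 mismatch: {digest}")
```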
## 9. Metrics (Recommended When Training Added)
- Mean IoU (per class & macro average)
- Dice coefficient
- Pixel accuracy
- Class frequency distribution (to inform potential class weighting)
Store per‑epoch metrics as JSON for reproducibility.
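As a reference, per-mask IoU and Dice for boolean masks can be computed as follows (a small sketch, not part of the current codebase):

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    """Compute IoU and Dice for two boolean masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return float(iou), float(dice)
```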
## 10. Limitations & Risks
Technical:
- The current version does not include a fine‑tuning script; only the inference wrapper is provided.
- The automatic mask generator is class‑agnostic; without fine‑tuning it may over‑segment or miss tiny fields.
Ethical / Compliance:
- Processing ID documents may involve PII; ensure secure storage and compliant handling.
- Not intended for biometric decisions nor identity verification pipelines without human oversight.
## 11. Roadmap
- [ ] Add training script (supervised fine‑tuning using `config.json`).
- [ ] Optional class‑guided prompting (points / boxes) pipeline.
- [ ] Export to ONNX / TorchScript.
- [ ] CLI interface for batch folder inference.
- [ ] Lightweight web demo (Gradio / FastAPI).
## 12. License & Citation
Specify a license in a top‑level `LICENSE` file (e.g., MIT or Apache‑2.0) ensuring compatibility with SAM2's original license.
Please cite SAM / SAM2 in academic work. Example (placeholder):
```
@article{kirillov2023segmentanything,
title={Segment Anything},
author={Kirillov, Alexander and others},
journal={arXiv preprint arXiv:2304.02643},
year={2023}
}
```
Add updated SAM2 citation once official reference is finalized.
## Acknowledgments
- Meta AI for releasing Segment Anything & SAM2.
- OpenCV, PyTorch, and the broader CV community.
---
If you have questions or need feature prioritization, open an Issue or start a Discussion.
|
kibaraki/wav2vec2-large-xlsr-53-shinekhen-buryat-random
|
kibaraki
| 2025-09-19T14:54:38Z | 6 | 0 | null |
[
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"dataset:kibaraki/Shinekhen-Buryat",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:cc-by-sa-4.0",
"region:us"
] |
automatic-speech-recognition
| 2025-09-17T20:58:33Z |
---
license: cc-by-sa-4.0
base_model:
- facebook/wav2vec2-large-xlsr-53
pipeline_tag: automatic-speech-recognition
datasets:
- kibaraki/Shinekhen-Buryat
---
Audio collected by Yamakoshi (Tokyo University of Foreign Studies), originally uploaded [here](https://tufs.repo.nii.ac.jp/search?search_type=2&q=1729497608274) (CC BY-SA 4.0).
Audio is converted to per-sentence audio clips.
Fine-tuning run: `fl_e30_b4_lr1e-4_cer_random873+shib`
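A minimal transcription sketch (assumed usage following the standard wav2vec2 CTC interface; the audio path is a placeholder):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "kibaraki/wav2vec2-large-xlsr-53-shinekhen-buryat-random"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

waveform, sr = torchaudio.load("clip.wav")  # hypothetical per-sentence clip
if sr != 16_000:
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```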
|
AngieJ1974/Scorpio
|
AngieJ1974
| 2025-09-19T14:54:15Z | 0 | 0 | null |
[
"license:cdla-permissive-2.0",
"region:us"
] | null | 2025-09-19T14:54:15Z |
---
license: cdla-permissive-2.0
---
|
moyixiao/Qwen3-0.6B-gspo-f16-200
|
moyixiao
| 2025-09-19T14:45:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T14:44:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/llama-3-2-1b-detox_RETRY_scale10_Round1-checkpoint-epoch-20
|
MattBou00
| 2025-09-19T14:44:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-19T14:42:55Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_RETRY_scale10_Round1-checkpoint-epoch-20")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_RETRY_scale10_Round1-checkpoint-epoch-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_RETRY_scale10_Round1-checkpoint-epoch-20")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Bojun-Feng/Qwen2.5-32B-Instruct-GGUF-llamafile
|
Bojun-Feng
| 2025-09-19T14:29:11Z | 31 | 0 | null |
[
"llamafile",
"chat",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-02-24T20:53:03Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct-GGUF/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- chat
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64a523ba1ed90082dafde3d3/kJrkxofwOp-89uYFe0EBb.png" alt="LlamaFile" style="width: 50%; min-width: 400px; display: block; margin: auto;">
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
I am not the original creator of llamafile, all credit of llamafile goes to Jartine:
<!-- README_llamafile.md-about-llamafile end -->
<!-- repositories-available start -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/FwAVVu7eJ4">Chat & support: jartine's Discord server</a></p>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">jartine's LLM work is generously supported by a grant from <a href="https://mozilla.org">mozilla</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Qwen2.5 32B Instruct GGUF - llamafile
## Run LLMs locally with a single file - No installation required!
All you need is download a file and run it.
Our goal is to make open source large language models much more
accessible to both developers and end users. We're doing that by
combining [llama.cpp](https://github.com/ggerganov/llama.cpp) with [Cosmopolitan Libc](https://github.com/jart/cosmopolitan) into one
framework that collapses all the complexity of LLMs down to
a single-file executable (called a "llamafile") that runs
locally on most computers, with no installation.
## How to Use (Modified from [Git README](https://github.com/Mozilla-Ocho/llamafile/tree/8f73d39cf3a767897b8ade6dda45e5744c62356a?tab=readme-ov-file#quickstart))
The easiest way to try it for yourself is to download our example llamafile.
With llamafile, all inference happens locally; no data ever leaves your computer.
1. Download the llamafile.
2. Open your computer's terminal.
3. If you're using macOS, Linux, or BSD, you'll need to grant permission
for your computer to execute this new file. (You only need to do this
once.)
```sh
chmod +x qwen2.5-32b-instruct-q8_0.gguf
```
4. If you're on Windows, rename the file by adding ".exe" on the end.
5. Run the llamafile. e.g.:
```sh
./qwen2.5-32b-instruct-q8_0.gguf
```
6. Your browser should open automatically and display a chat interface.
(If it doesn't, just open your browser and point it at http://localhost:8080.)
7. When you're done chatting, return to your terminal and hit
`Control-C` to shut down llamafile.
Note: Hugging Face has a 50 GB file upload limit, so you may need to use the `cat` command to concatenate large llamafiles before running them.
Here is an example doing so for `Mozilla/Meta-Llama-3.1-405B-Instruct-llamafile`:
```
wget https://huggingface.co/Mozilla/Meta-Llama-3.1-405B-llamafile/resolve/main/Meta-Llama-3.1-405B.Q2_K.cat0.llamafile
wget https://huggingface.co/Mozilla/Meta-Llama-3.1-405B-llamafile/resolve/main/Meta-Llama-3.1-405B.Q2_K.cat1.llamafile
wget https://huggingface.co/Mozilla/Meta-Llama-3.1-405B-llamafile/resolve/main/Meta-Llama-3.1-405B.Q2_K.cat2.llamafile
wget https://huggingface.co/Mozilla/Meta-Llama-3.1-405B-llamafile/resolve/main/Meta-Llama-3.1-405B.Q2_K.cat3.llamafile
cat Meta-Llama-3.1-405B.Q2_K.cat{0,1,2,3}.llamafile >Meta-Llama-3.1-405B.Q2_K.llamafile
rm Meta-Llama-3.1-405B.Q2_K.cat*.llamafile
chmod +x Meta-Llama-3.1-405B.Q2_K.llamafile
./Meta-Llama-3.1-405B.Q2_K.llamafile
```
Please note that LlamaFile is still under active development. Some methods may not be compatible with the most recent documentation.
## Settings for Qwen2.5 32B Instruct GGUF Llamafiles
- Model creator: [Qwen](https://huggingface.co/Qwen)
- Quantized GGUF files used: [Qwen/Qwen2.5-32B-Instruct-GGUF](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct-GGUF/tree/a15e3cc10f8bbb2c0af6f8f1f34a32e3b060c09d)
- Commit message "upload fp16 weights"
- Commit hash a15e3cc10f8bbb2c0af6f8f1f34a32e3b060c09d
- LlamaFile version used: [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile/tree/29b5f27172306da39a9c70fe25173da1b1564f82)
- Commit message "Merge pull request #687 from Xydane/main Add Support for DeepSeek-R1 models"
- Commit hash 29b5f27172306da39a9c70fe25173da1b1564f82
- `.args` content format (example):
```
-m
qwen2.5-32b-instruct-q8_0.gguf
...
```
## (Following is original model card for Qwen2.5 32B Instruct GGUF)
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
# Qwen2.5-32B-Instruct-GGUF
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 32B Qwen2.5 model in the GGUF Format**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
- Note: Currently, only vLLM supports YARN for length extrapolating. If you want to process sequences up to 131,072 tokens, please refer to non-GGUF models.
- Quantization: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for more usage guide.
We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp.
In the following demonstration, we assume that you are running commands under the repository `llama.cpp`.
Since cloning the entire repo may be inefficient, you can manually download the GGUF file that you need or use `huggingface-cli`:
1. Install
```shell
pip install -U huggingface_hub
```
2. Download:
```shell
huggingface-cli download Qwen/Qwen2.5-32B-Instruct-GGUF --include "qwen2.5-32b-instruct-q5_k_m*.gguf" --local-dir . --local-dir-use-symlinks False
```
For large files, we split them into multiple segments due to the file-upload size limit. They share a prefix, with a suffix indicating the segment index. For example, `qwen2.5-32b-instruct-q5_k_m-00001-of-00006.gguf` to `qwen2.5-32b-instruct-q5_k_m-00006-of-00006.gguf`. The above command will download all of them.
3. (Optional) Merge:
For split files, you need to merge them first with the command `llama-gguf-split` as shown below:
```bash
# ./llama-gguf-split --merge <first-split-file-path> <merged-file-path>
./llama-gguf-split --merge qwen2.5-32b-instruct-q5_k_m-00001-of-00006.gguf qwen2.5-32b-instruct-q5_k_m.gguf
```
For users, to achieve a chatbot-like experience, it is recommended to start in conversation mode:
```shell
./llama-cli -m <gguf-file-path> \
-co -cnv -p "You are Qwen, created by Alibaba Cloud. You are a helpful assistant." \
-fa -ngl 80 -n 512
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For quantized models, the benchmark results against the original bfloat16 models can be found [here](https://qwen.readthedocs.io/en/latest/benchmark/quantization_benchmark.html)
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
tommycik/ControlNetHedNew
|
tommycik
| 2025-09-19T14:21:40Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"flux",
"flux-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-16T11:28:58Z |
---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
inference: true
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-tommycik/ControlNetHedNew
These are controlnet weights trained on black-forest-labs/FLUX.1-dev with a new type of conditioning.
You can find some example images below.
prompt: transparent cocktail glass with elegant stem and a double curved bowl on a white background

## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md)
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
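Until the official snippet is added, here is a hedged sketch using the diffusers Flux ControlNet pipeline; the conditioning-image path and the sampler parameters are assumptions:

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "tommycik/ControlNetHedNew", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

control_image = load_image("hed_edges.png")  # hypothetical HED edge-map conditioning image
image = pipe(
    prompt="transparent cocktail glass with elegant stem and a double curved bowl on a white background",
    control_image=control_image,
    controlnet_conditioning_scale=0.7,  # assumed value
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```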
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
djinn-anthrope/python-code-completion-mistral-24B
|
djinn-anthrope
| 2025-09-19T14:17:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T14:16:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
admiralakber/gemma-3n-E2B-it-Q4_0-GGUF
|
admiralakber
| 2025-09-19T14:10:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"automatic-speech-recognition",
"automatic-speech-translation",
"audio-text-to-text",
"video-text-to-text",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/gemma-3n-E2B-it",
"base_model:quantized:google/gemma-3n-E2B-it",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-19T14:09:48Z |
---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3n-E2B-it
tags:
- automatic-speech-recognition
- automatic-speech-translation
- audio-text-to-text
- video-text-to-text
- llama-cpp
- gguf-my-repo
---
# admiralakber/gemma-3n-E2B-it-Q4_0-GGUF
This model was converted to GGUF format from [`google/gemma-3n-E2B-it`](https://huggingface.co/google/gemma-3n-E2B-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-3n-E2B-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo admiralakber/gemma-3n-E2B-it-Q4_0-GGUF --hf-file gemma-3n-e2b-it-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo admiralakber/gemma-3n-E2B-it-Q4_0-GGUF --hf-file gemma-3n-e2b-it-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo admiralakber/gemma-3n-E2B-it-Q4_0-GGUF --hf-file gemma-3n-e2b-it-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo admiralakber/gemma-3n-E2B-it-Q4_0-GGUF --hf-file gemma-3n-e2b-it-q4_0.gguf -c 2048
```
|
Diogo2303/whisper-medium-F5-Adult-50h-1epoch
|
Diogo2303
| 2025-09-19T14:06:30Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"pt",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T11:59:45Z |
---
language:
- pt
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
model-index:
- name: Whisper MEDIUM Adult 50h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper MEDIUM Adult 50h
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the 800 dataset.
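Since the card does not yet include a usage example, here is a minimal inference sketch. It assumes the repository keeps the standard Whisper processor/tokenizer files and that you pass 16 kHz mono audio; the `language` setting reflects the card's `pt` tag.
```python
# Minimal inference sketch (assumptions: standard Whisper processor files in
# the repo and a recent transformers release with the ASR pipeline).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Diogo2303/whisper-medium-F5-Adult-50h-1epoch",
)
# Whisper expects 16 kHz mono audio; long recordings are best processed in chunks.
result = asr("sample.wav", generate_kwargs={"language": "portuguese"})
print(result["text"])
```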
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.14.0
|
MattBou00/llama-3-2-1b-detox_RETRY_scale10_Round3-checkpoint-epoch-60
|
MattBou00
| 2025-09-19T14:05:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-19T14:03:57Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-19_13-52-41/checkpoints/checkpoint-epoch-60")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-19_13-52-41/checkpoints/checkpoint-epoch-60")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-19_13-52-41/checkpoints/checkpoint-epoch-60")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
WaiLwin/j
|
WaiLwin
| 2025-09-19T14:01:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T13:58:25Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: j
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# j
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3746
- Topology Accuracy: 0.9851
- Service Accuracy: 0.9435
- Combined Accuracy: 0.9643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Topology Accuracy | Service Accuracy | Combined Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:----------------:|:-----------------:|
| 1.016 | 1.0 | 64 | 0.9725 | 0.7411 | 0.6220 | 0.6815 |
| 0.7234 | 2.0 | 128 | 0.6385 | 0.9643 | 0.6935 | 0.8289 |
| 0.6038 | 3.0 | 192 | 0.5826 | 0.9345 | 0.7440 | 0.8393 |
| 0.5014 | 4.0 | 256 | 0.5192 | 0.9583 | 0.7738 | 0.8661 |
| 0.3959 | 5.0 | 320 | 0.4845 | 0.9732 | 0.7768 | 0.875 |
| 0.4165 | 6.0 | 384 | 0.4579 | 0.9762 | 0.8601 | 0.9182 |
| 0.3699 | 7.0 | 448 | 0.4156 | 0.9851 | 0.9286 | 0.9568 |
| 0.3272 | 8.0 | 512 | 0.3777 | 0.9851 | 0.9524 | 0.9688 |
| 0.3091 | 9.0 | 576 | 0.3714 | 0.9851 | 0.9464 | 0.9658 |
| 0.3092 | 10.0 | 640 | 0.3814 | 0.9821 | 0.9464 | 0.9643 |
| 0.3221 | 11.0 | 704 | 0.3811 | 0.9821 | 0.9405 | 0.9613 |
| 0.3033 | 12.0 | 768 | 0.3724 | 0.9851 | 0.9405 | 0.9628 |
| 0.304 | 13.0 | 832 | 0.3741 | 0.9881 | 0.9435 | 0.9658 |
| 0.3051 | 14.0 | 896 | 0.3743 | 0.9851 | 0.9435 | 0.9643 |
| 0.3039 | 15.0 | 960 | 0.3746 | 0.9851 | 0.9435 | 0.9643 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
RattusTeam/blockassist
|
RattusTeam
| 2025-09-19T13:35:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy powerful macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T13:35:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy powerful macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ricardo-teixeira9/Reinforce-CartPole-v1
|
ricardo-teixeira9
| 2025-09-19T13:28:40Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-19T13:10:29Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
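Below is an illustrative sketch of the REINFORCE pieces such an agent is built from; the network sizes and helper names are assumptions for illustration, not the exact code used to train this checkpoint (that lives in the Unit 4 notebook).
```python
# Illustrative REINFORCE sketch (assumptions: CartPole-v1 observation/action
# sizes and a small MLP policy; not the exact training code for this model).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class Policy(nn.Module):
    def __init__(self, obs_dim=4, n_actions=2, hidden=16):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_actions)

    def forward(self, x):
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=-1)

def select_action(policy, state):
    # Sample an action and keep its log-probability for the policy-gradient update.
    probs = policy(torch.as_tensor(state, dtype=torch.float32))
    dist = Categorical(probs)
    action = dist.sample()
    return action.item(), dist.log_prob(action)

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Discounted returns computed backwards over one episode, then normalized.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.stack(log_probs) * returns).sum()
```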
|
AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en
|
AmberYifan
| 2025-09-19T13:18:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T08:22:42Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the mix_low_tweet_1m_en dataset.
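A minimal generation sketch follows; it assumes the checkpoint keeps the chat template inherited from the Qwen2.5 instruct base model and a recent transformers release whose text-generation pipeline accepts chat-style inputs.
```python
# Minimal generation sketch (assumptions: inherited Qwen2.5 chat template and
# a transformers version whose text-generation pipeline accepts chat messages).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en",
)
messages = [{"role": "user", "content": "Write a short tweet about open-source AI."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```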
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
XiangyuWen/qwen2.5-3b-finetuned-cnn_dailymail
|
XiangyuWen
| 2025-09-19T13:17:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T12:53:45Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: qwen2.5-3b-finetuned-cnn_dailymail
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-3b-finetuned-cnn_dailymail
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="XiangyuWen/qwen2.5-3b-finetuned-cnn_dailymail", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
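Since the fine-tuning data is CNN/DailyMail, a summarization-style prompt is the more representative use; the sketch below assumes the SFT data framed articles as "summarize this article" chat turns.
```python
# Summarization-style sketch (assumption: the SFT data framed CNN/DailyMail
# examples as "summarize this article" chat turns).
from transformers import pipeline

summarizer = pipeline(
    "text-generation",
    model="XiangyuWen/qwen2.5-3b-finetuned-cnn_dailymail",
    device="cuda",
)
article = "(paste a news article here)"
prompt = f"Summarize the following article in a few sentences:\n\n{article}"
out = summarizer([{"role": "user", "content": prompt}], max_new_tokens=200, return_full_text=False)[0]
print(out["generated_text"])
```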
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/artiease-muse/huggingface/runs/lbsx63u6)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/PUGC-Mistral-DPO-GGUF
|
mradermacher
| 2025-09-19T12:33:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-19T12:33:19Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Zhaoxuan/PUGC-Mistral-DPO
|
alexisriot/qwen3-06b
|
alexisriot
| 2025-09-19T12:28:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-ranking",
"arxiv:2506.05176",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-ranking
| 2025-09-19T12:22:55Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B-Base
library_name: transformers
pipeline_tag: text-ranking
---
# Qwen3-Reranker-0.6B
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
</p>
## Highlights
The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
**Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks No.1 in the MTEB multilingual leaderboard (as of June 5, 2025, score 70.58), while the reranking model excels in various text retrieval scenarios.
**Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
**Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of the Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
## Model Overview
**Qwen3-Reranker-0.6B** has the following features:
- Model Type: Text Reranking
- Supported Languages: 100+ Languages
- Number of Parameters: 0.6B
- Context Length: 32k
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/), [GitHub](https://github.com/QwenLM/Qwen3-Embedding).
## Qwen3 Embedding Series Model list
| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
| Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |
> **Note**:
> - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
> - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
> - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
## Usage
With Transformers versions earlier than 4.51.0, you may encounter the following error:
```
KeyError: 'qwen3'
```
### Transformers Usage
```python
# Requires transformers>=4.51.0
import torch
from transformers import AutoModel, AutoTokenizer, AutoModelForCausalLM
def format_instruction(instruction, query, doc):
if instruction is None:
instruction = 'Given a web search query, retrieve relevant passages that answer the query'
output = "<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {doc}".format(instruction=instruction,query=query, doc=doc)
return output
def process_inputs(pairs):
inputs = tokenizer(
pairs, padding=False, truncation='longest_first',
return_attention_mask=False, max_length=max_length - len(prefix_tokens) - len(suffix_tokens)
)
for i, ele in enumerate(inputs['input_ids']):
inputs['input_ids'][i] = prefix_tokens + ele + suffix_tokens
inputs = tokenizer.pad(inputs, padding=True, return_tensors="pt", max_length=max_length)
for key in inputs:
inputs[key] = inputs[key].to(model.device)
return inputs
@torch.no_grad()
def compute_logits(inputs, **kwargs):
batch_scores = model(**inputs).logits[:, -1, :]
true_vector = batch_scores[:, token_true_id]
false_vector = batch_scores[:, token_false_id]
batch_scores = torch.stack([false_vector, true_vector], dim=1)
batch_scores = torch.nn.functional.log_softmax(batch_scores, dim=1)
scores = batch_scores[:, 1].exp().tolist()
return scores
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Reranker-0.6B", padding_side='left')
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-0.6B").eval()
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-0.6B", torch_dtype=torch.float16, attn_implementation="flash_attention_2").cuda().eval()
token_false_id = tokenizer.convert_tokens_to_ids("no")
token_true_id = tokenizer.convert_tokens_to_ids("yes")
max_length = 8192
prefix = "<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\".<|im_end|>\n<|im_start|>user\n"
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
prefix_tokens = tokenizer.encode(prefix, add_special_tokens=False)
suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = ["What is the capital of China?",
"Explain gravity",
]
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]
pairs = [format_instruction(task, query, doc) for query, doc in zip(queries, documents)]
# Tokenize the input texts
inputs = process_inputs(pairs)
scores = compute_logits(inputs)
print("scores: ", scores)
```
### vLLM Usage
```python
# Requires vllm>=0.8.5
import logging
from typing import Dict, Optional, List
import json
import torch
from transformers import AutoTokenizer, is_torch_npu_available
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import destroy_model_parallel
import gc
import math
from vllm.inputs.data import TokensPrompt
def format_instruction(instruction, query, doc):
text = [
{"role": "system", "content": "Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\"."},
{"role": "user", "content": f"<Instruct>: {instruction}\n\n<Query>: {query}\n\n<Document>: {doc}"}
]
return text
def process_inputs(pairs, instruction, max_length, suffix_tokens):
messages = [format_instruction(instruction, query, doc) for query, doc in pairs]
messages = tokenizer.apply_chat_template(
messages, tokenize=True, add_generation_prompt=False, enable_thinking=False
)
messages = [ele[:max_length] + suffix_tokens for ele in messages]
messages = [TokensPrompt(prompt_token_ids=ele) for ele in messages]
return messages
def compute_logits(model, messages, sampling_params, true_token, false_token):
outputs = model.generate(messages, sampling_params, use_tqdm=False)
scores = []
for i in range(len(outputs)):
final_logits = outputs[i].outputs[0].logprobs[-1]
token_count = len(outputs[i].outputs[0].token_ids)
if true_token not in final_logits:
true_logit = -10
else:
true_logit = final_logits[true_token].logprob
if false_token not in final_logits:
false_logit = -10
else:
false_logit = final_logits[false_token].logprob
true_score = math.exp(true_logit)
false_score = math.exp(false_logit)
score = true_score / (true_score + false_score)
scores.append(score)
return scores
number_of_gpu = torch.cuda.device_count()
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Reranker-0.6B')
model = LLM(model='Qwen/Qwen3-Reranker-0.6B', tensor_parallel_size=number_of_gpu, max_model_len=10000, enable_prefix_caching=True, gpu_memory_utilization=0.8)
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
max_length=8192
suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)
true_token = tokenizer("yes", add_special_tokens=False).input_ids[0]
false_token = tokenizer("no", add_special_tokens=False).input_ids[0]
sampling_params = SamplingParams(temperature=0,
max_tokens=1,
logprobs=20,
allowed_token_ids=[true_token, false_token],
)
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = ["What is the capital of China?",
"Explain gravity",
]
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]
pairs = list(zip(queries, documents))
inputs = process_inputs(pairs, task, max_length-len(suffix_tokens), suffix_tokens)
scores = compute_logits(model, inputs, sampling_params, true_token, false_token)
print('scores', scores)
destroy_model_parallel()
```
📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.
## Evaluation
| Model | Param | MTEB-R | CMTEB-R | MMTEB-R | MLDR | MTEB-Code | FollowIR |
|------------------------------------|--------|---------|---------|---------|--------|-----------|----------|
| **Qwen3-Embedding-0.6B** | 0.6B | 61.82 | 71.02 | 64.64 | 50.26 | 75.41 | 5.09 |
| Jina-multilingual-reranker-v2-base | 0.3B | 58.22 | 63.37 | 63.73 | 39.66 | 58.98 | -0.68 |
| gte-multilingual-reranker-base | 0.3B | 59.51 | 74.08 | 59.44 | 66.33 | 54.18 | -1.64 |
| BGE-reranker-v2-m3 | 0.6B | 57.03 | 72.16 | 58.36 | 59.51 | 41.38 | -0.01 |
| **Qwen3-Reranker-0.6B** | 0.6B | 65.80 | 71.31 | 66.36 | 67.28 | 73.42 | 5.41 |
| **Qwen3-Reranker-4B** | 4B | **69.76** | 75.94 | 72.74 | 69.97 | 81.20 | **14.84** |
| **Qwen3-Reranker-8B** | 8B | 69.02 | **77.45** | **72.94** | **70.19** | **81.22** | 8.05 |
> **Note**:
> - Evaluation results for reranking models. We use the retrieval subsets of MTEB(eng, v2), MTEB(cmn, v1), MMTEB and MTEB (Code), which are MTEB-R, CMTEB-R, MMTEB-R and MTEB-Code.
> - All scores are our runs based on the top-100 candidates retrieved by dense embedding model [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen3embedding,
title={Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models},
author={Zhang, Yanzhao and Li, Mingxin and Long, Dingkun and Zhang, Xin and Lin, Huan and Yang, Baosong and Xie, Pengjun and Yang, An and Liu, Dayiheng and Lin, Junyang and Huang, Fei and Zhou, Jingren},
journal={arXiv preprint arXiv:2506.05176},
year={2025}
}
```
|
yuhuili/EAGLE-Qwen2-72B-Instruct
|
yuhuili
| 2025-09-19T12:19:15Z | 33 | 1 | null |
[
"pytorch",
"qwen2",
"arxiv:2401.15077",
"arxiv:2406.16858",
"arxiv:2503.01840",
"license:apache-2.0",
"region:us"
] | null | 2024-08-07T17:44:12Z |
---
license: apache-2.0
---
<img src="figs/logo.png" alt="EAGLE" width="220" align="left"><div align="center"><h1> EAGLE</h1></div>
<p align="center">
| <a href="https://arxiv.org/pdf/2401.15077.pdf"><b>EAGLE</b></a> |
<a href="https://arxiv.org/pdf/2406.16858"><b>EAGLE-2</b></a> |
<a href="https://arxiv.org/pdf/2503.01840"><b>EAGLE-3</b></a> |
<a href="https://sites.google.com/view/
eagle-llm"><b>Blog</b></a> |
</p>
<p align="center">
<a href="">
<img src="https://img.shields.io/badge/Version-v3.0.0-orange.svg" alt="Version">
</a>
<a href="https://opensource.org/licenses/Apache-2.0">
<img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License">
</a>
<a href="https://github.com/SafeAILab/EAGLE/issues">
<img src="https://img.shields.io/badge/Maintained%3F-yes-green.svg" alt="Maintenance">
</a>
<a href="https://github.com/SafeAILab/EAGLE/pulls">
<img src="https://img.shields.io/badge/Contributions-welcome-brightgreen.svg?style=flat" alt="Contributions welcome">
</a>
</p>
##
<p align="center">
<img src="./figs/eagle3r.jpg" alt="benchmark" width="790">
</p>
EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency) is a new baseline for fast decoding of Large Language Models (LLMs) with provable performance maintenance. This approach involves extrapolating the second-top-layer contextual feature vectors of LLMs, enabling a significant boost in generation efficiency.
- EAGLE is:
- certified by the <a href="https://github.com/hemingkx/Spec-Bench/blob/main/Leaderboard.md"><b>third-party</b></a> evaluation as the **fastest** speculative method so far.
- achieving **2x** speedup on <a href="https://github.com/pytorch-labs/gpt-fast"><b>gpt-fast</b></a>.
- **3x** faster than vanilla decoding (13B).
- **2x** faster than <a href="https://lmsys.org/blog/2023-11-21-lookahead-decoding/"><b>Lookahead</b></a> (13B).
- **1.6x** faster than <a href="https://sites.google.com/view/medusa-llm"><b>Medusa</b></a> (13B).
- provably maintaining the consistency with vanilla decoding in the distribution of generated texts.
- trainable (within 1-2 days) and testable on 8x RTX 3090 GPUs. So even the GPU poor can afford it.
  - combinable with other parallel techniques such as vLLM, DeepSpeed, Mamba, FlashAttention, quantization, and hardware optimization.
EAGLE-2 uses the confidence scores from the draft model to approximate acceptance rates, dynamically adjusting the draft tree structure, which further enhances performance.
- EAGLE-2 is:
- **4x** faster than vanilla decoding (13B).
- **1.4x** faster than EAGLE-1 (13B).
EAGLE-3 removes the feature prediction constraint in EAGLE and simulates this process during training using training-time testing. Considering that top-layer features are limited to next-token prediction, EAGLE-3 replaces them with a fusion of low-, mid-, and high-level semantic features.
EAGLE-3 further improves generation speed while ensuring lossless performance.
- EAGLE-3 is:
  - **5.6x** faster than vanilla decoding (13B).
- **1.8x** faster than EAGLE-1 (13B).
<p align="center">
<img src="./figs/e3.gif" alt="demogif" width="600">
</p>
_Inference is conducted on 2x RTX 3090 GPUs at fp16 precision using the Vicuna 13B model._
[//]: # ()
[//]: # ()
[//]: # (Using EAGLE-2, the inference speed on 2 RTX 3060 GPUs can be faster than vanilla autoregressive decoding on an A100 GPU.)
## Support
EAGLE has been merged into the following mainstream LLM serving frameworks (listed in alphabetical order).
- <a href="https://rocm.docs.amd.com/en/latest/">AMD ROCm</a>
- <a href="https://angelslim.readthedocs.io/zh-cn/latest/features/speculative_decoding/eagle.html">AngelSlim</a>
- <a href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/feature-guide.html#eagle-speculative-decoding">AWS NeuronX Distributed Core</a>
- <a href="https://github.com/OpenBMB/CPM.cu">CPM.cu</a>
- <a href="https://github.com/intel/intel-extension-for-transformers/pull/1504">Intel® Extension for Transformers</a>
- <a href="https://github.com/intel-analytics/ipex-llm/pull/11104">Intel® LLM Library for PyTorch</a>
- <a href="https://llm.mlc.ai/docs/deploy/rest.html">MLC-LLM</a>
- <a href="https://docs.nvidia.com/nemo-framework/user-guide/latest/model-optimization/speculative/speculative.html">NVIDIA NeMo Framework</a>
- <a href="https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/eagle">NVIDIA TensorRT-LLM</a>
- <a href="https://nvidia.github.io/TensorRT-Model-Optimizer/guides/7_speculative_decoding.html">NVIDIA TensorRT Model Optimizer</a>
- <a href="https://paddlenlp.readthedocs.io/en/latest/llm/docs/predict/speculative_decoding.html">PaddleNLP</a>
- <a href="https://docs.sglang.ai/advanced_features/speculative_decoding.html">SGLang</a>
- <a href="https://github.com/sgl-project/SpecForge">SpecForge</a>
- <a href="https://github.com/vllm-project/vllm/pull/16937">vLLM</a>
## Reference
For technical details and full experimental results, please check [the paper of EAGLE](https://arxiv.org/pdf/2401.15077.pdf), [the paper of EAGLE-2](https://arxiv.org/pdf/2406.16858), and [the paper of EAGLE-3](https://arxiv.org/pdf/2503.01840).
```
@inproceedings{li2024eagle,
author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
title = {{EAGLE}: Speculative Sampling Requires Rethinking Feature Uncertainty},
booktitle = {International Conference on Machine Learning},
year = {2024}
}
@inproceedings{li2024eagle2,
author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
title = {{EAGLE-2}: Faster Inference of Language Models with Dynamic Draft Trees},
booktitle = {Empirical Methods in Natural Language Processing},
year = {2024}
}
@inproceedings{li2025eagle3,
author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
title = {{EAGLE-3}: Scaling up Inference Acceleration of Large Language Models via Training-Time Test},
booktitle = {Annual Conference on Neural Information Processing Systems},
year = {2025}
}
```
|
yuhuili/EAGLE3-LLaMA3.3-Instruct-70B
|
yuhuili
| 2025-09-19T12:14:33Z | 1,481 | 6 | null |
[
"pytorch",
"llama",
"arxiv:2401.15077",
"arxiv:2406.16858",
"arxiv:2503.01840",
"license:apache-2.0",
"region:us"
] | null | 2025-03-05T04:40:00Z |
---
license: apache-2.0
---
<img src="figs/logo.png" alt="EAGLE" width="220" align="left"><div align="center"><h1> EAGLE</h1></div>
<p align="center">
| <a href="https://arxiv.org/pdf/2401.15077.pdf"><b>EAGLE</b></a> |
<a href="https://arxiv.org/pdf/2406.16858"><b>EAGLE-2</b></a> |
<a href="https://arxiv.org/pdf/2503.01840"><b>EAGLE-3</b></a> |
<a href="https://sites.google.com/view/
eagle-llm"><b>Blog</b></a> |
</p>
<p align="center">
<a href="">
<img src="https://img.shields.io/badge/Version-v3.0.0-orange.svg" alt="Version">
</a>
<a href="https://opensource.org/licenses/Apache-2.0">
<img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License">
</a>
<a href="https://github.com/SafeAILab/EAGLE/issues">
<img src="https://img.shields.io/badge/Maintained%3F-yes-green.svg" alt="Maintenance">
</a>
<a href="https://github.com/SafeAILab/EAGLE/pulls">
<img src="https://img.shields.io/badge/Contributions-welcome-brightgreen.svg?style=flat" alt="Contributions welcome">
</a>
</p>
##
<p align="center">
<img src="./figs/eagle3r.jpg" alt="benchmark" width="790">
</p>
EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency) is a new baseline for fast decoding of Large Language Models (LLMs) with provable performance maintenance. This approach involves extrapolating the second-top-layer contextual feature vectors of LLMs, enabling a significant boost in generation efficiency.
- EAGLE is:
- certified by the <a href="https://github.com/hemingkx/Spec-Bench/blob/main/Leaderboard.md"><b>third-party</b></a> evaluation as the **fastest** speculative method so far.
- achieving **2x** speedup on <a href="https://github.com/pytorch-labs/gpt-fast"><b>gpt-fast</b></a>.
- **3x** faster than vanilla decoding (13B).
- **2x** faster than <a href="https://lmsys.org/blog/2023-11-21-lookahead-decoding/"><b>Lookahead</b></a> (13B).
- **1.6x** faster than <a href="https://sites.google.com/view/medusa-llm"><b>Medusa</b></a> (13B).
- provably maintaining the consistency with vanilla decoding in the distribution of generated texts.
- trainable (within 1-2 days) and testable on 8x RTX 3090 GPUs. So even the GPU poor can afford it.
  - combinable with other parallel techniques such as vLLM, DeepSpeed, Mamba, FlashAttention, quantization, and hardware optimization.
EAGLE-2 uses the confidence scores from the draft model to approximate acceptance rates, dynamically adjusting the draft tree structure, which further enhances performance.
- EAGLE-2 is:
- **4x** faster than vanilla decoding (13B).
- **1.4x** faster than EAGLE-1 (13B).
EAGLE-3 removes the feature prediction constraint in EAGLE and simulates this process during training using training-time testing. Considering that top-layer features are limited to next-token prediction, EAGLE-3 replaces them with a fusion of low-, mid-, and high-level semantic features.
EAGLE-3 further improves generation speed while ensuring lossless performance.
- EAGLE-3 is:
  - **5.6x** faster than vanilla decoding (13B).
- **1.8x** faster than EAGLE-1 (13B).
<p align="center">
<img src="./figs/e3.gif" alt="demogif" width="600">
</p>
_Inference is conducted on 2x RTX 3090 GPUs at fp16 precision using the Vicuna 13B model._
[//]: # ()
[//]: # ()
[//]: # (Using EAGLE-2, the inference speed on 2 RTX 3060 GPUs can be faster than vanilla autoregressive decoding on an A100 GPU.)
## Support
EAGLE has been merged into the following mainstream LLM serving frameworks (listed in alphabetical order).
- <a href="https://rocm.docs.amd.com/en/latest/">AMD ROCm</a>
- <a href="https://angelslim.readthedocs.io/zh-cn/latest/features/speculative_decoding/eagle.html">AngelSlim</a>
- <a href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/feature-guide.html#eagle-speculative-decoding">AWS NeuronX Distributed Core</a>
- <a href="https://github.com/OpenBMB/CPM.cu">CPM.cu</a>
- <a href="https://github.com/intel/intel-extension-for-transformers/pull/1504">Intel® Extension for Transformers</a>
- <a href="https://github.com/intel-analytics/ipex-llm/pull/11104">Intel® LLM Library for PyTorch</a>
- <a href="https://llm.mlc.ai/docs/deploy/rest.html">MLC-LLM</a>
- <a href="https://docs.nvidia.com/nemo-framework/user-guide/latest/model-optimization/speculative/speculative.html">NVIDIA NeMo Framework</a>
- <a href="https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/eagle">NVIDIA TensorRT-LLM</a>
- <a href="https://nvidia.github.io/TensorRT-Model-Optimizer/guides/7_speculative_decoding.html">NVIDIA TensorRT Model Optimizer</a>
- <a href="https://paddlenlp.readthedocs.io/en/latest/llm/docs/predict/speculative_decoding.html">PaddleNLP</a>
- <a href="https://docs.sglang.ai/advanced_features/speculative_decoding.html">SGLang</a>
- <a href="https://github.com/sgl-project/SpecForge">SpecForge</a>
- <a href="https://github.com/vllm-project/vllm/pull/16937">vLLM</a>
## Reference
For technical details and full experimental results, please check [the paper of EAGLE](https://arxiv.org/pdf/2401.15077.pdf), [the paper of EAGLE-2](https://arxiv.org/pdf/2406.16858), and [the paper of EAGLE-3](https://arxiv.org/pdf/2503.01840).
```
@inproceedings{li2024eagle,
author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
title = {{EAGLE}: Speculative Sampling Requires Rethinking Feature Uncertainty},
booktitle = {International Conference on Machine Learning},
year = {2024}
}
@inproceedings{li2024eagle2,
author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
title = {{EAGLE-2}: Faster Inference of Language Models with Dynamic Draft Trees},
booktitle = {Empirical Methods in Natural Language Processing},
year = {2024}
}
@inproceedings{li2025eagle3,
author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
title = {{EAGLE-3}: Scaling up Inference Acceleration of Large Language Models via Training-Time Test},
booktitle = {Annual Conference on Neural Information Processing Systems},
year = {2025}
}
```
|
yuhuili/EAGLE-LLaMA3.1-Instruct-8B
|
yuhuili
| 2025-09-19T12:12:30Z | 122,386 | 1 | null |
[
"pytorch",
"llama",
"arxiv:2401.15077",
"arxiv:2406.16858",
"arxiv:2503.01840",
"license:apache-2.0",
"region:us"
] | null | 2025-03-10T16:26:21Z |
---
license: apache-2.0
---
<img src="figs/logo.png" alt="EAGLE" width="220" align="left"><div align="center"><h1> EAGLE</h1></div>
<p align="center">
| <a href="https://arxiv.org/pdf/2401.15077.pdf"><b>EAGLE</b></a> |
<a href="https://arxiv.org/pdf/2406.16858"><b>EAGLE-2</b></a> |
<a href="https://arxiv.org/pdf/2503.01840"><b>EAGLE-3</b></a> |
<a href="https://sites.google.com/view/
eagle-llm"><b>Blog</b></a> |
</p>
<p align="center">
<a href="">
<img src="https://img.shields.io/badge/Version-v3.0.0-orange.svg" alt="Version">
</a>
<a href="https://opensource.org/licenses/Apache-2.0">
<img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License">
</a>
<a href="https://github.com/SafeAILab/EAGLE/issues">
<img src="https://img.shields.io/badge/Maintained%3F-yes-green.svg" alt="Maintenance">
</a>
<a href="https://github.com/SafeAILab/EAGLE/pulls">
<img src="https://img.shields.io/badge/Contributions-welcome-brightgreen.svg?style=flat" alt="Contributions welcome">
</a>
</p>
##
<p align="center">
<img src="./figs/eagle3r.jpg" alt="benchmark" width="790">
</p>
EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency) is a new baseline for fast decoding of Large Language Models (LLMs) with provable performance maintenance. This approach involves extrapolating the second-top-layer contextual feature vectors of LLMs, enabling a significant boost in generation efficiency.
- EAGLE is:
- certified by the <a href="https://github.com/hemingkx/Spec-Bench/blob/main/Leaderboard.md"><b>third-party</b></a> evaluation as the **fastest** speculative method so far.
- achieving **2x** speedup on <a href="https://github.com/pytorch-labs/gpt-fast"><b>gpt-fast</b></a>.
- **3x** faster than vanilla decoding (13B).
- **2x** faster than <a href="https://lmsys.org/blog/2023-11-21-lookahead-decoding/"><b>Lookahead</b></a> (13B).
- **1.6x** faster than <a href="https://sites.google.com/view/medusa-llm"><b>Medusa</b></a> (13B).
- provably maintaining the consistency with vanilla decoding in the distribution of generated texts.
- trainable (within 1-2 days) and testable on 8x RTX 3090 GPUs. So even the GPU poor can afford it.
  - combinable with other parallel techniques such as vLLM, DeepSpeed, Mamba, FlashAttention, quantization, and hardware optimization.
EAGLE-2 uses the confidence scores from the draft model to approximate acceptance rates, dynamically adjusting the draft tree structure, which further enhances performance.
- EAGLE-2 is:
- **4x** faster than vanilla decoding (13B).
- **1.4x** faster than EAGLE-1 (13B).
EAGLE-3 removes the feature prediction constraint in EAGLE and simulates this process during training using training-time testing. Considering that top-layer features are limited to next-token prediction, EAGLE-3 replaces them with a fusion of low-, mid-, and high-level semantic features.
EAGLE-3 further improves generation speed while ensuring lossless performance.
- EAGLE-3 is:
  - **5.6x** faster than vanilla decoding (13B).
- **1.8x** faster than EAGLE-1 (13B).
<p align="center">
<img src="./figs/e3.gif" alt="demogif" width="600">
</p>
_Inference is conducted on 2x RTX 3090 GPUs at fp16 precision using the Vicuna 13B model._
[//]: # ()
[//]: # ()
[//]: # (Using EAGLE-2, the inference speed on 2 RTX 3060 GPUs can be faster than vanilla autoregressive decoding on an A100 GPU.)
## Support
EAGLE has been merged into the following mainstream LLM serving frameworks (listed in alphabetical order).
- <a href="https://rocm.docs.amd.com/en/latest/">AMD ROCm</a>
- <a href="https://angelslim.readthedocs.io/zh-cn/latest/features/speculative_decoding/eagle.html">AngelSlim</a>
- <a href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/feature-guide.html#eagle-speculative-decoding">AWS NeuronX Distributed Core</a>
- <a href="https://github.com/OpenBMB/CPM.cu">CPM.cu</a>
- <a href="https://github.com/intel/intel-extension-for-transformers/pull/1504">Intel® Extension for Transformers</a>
- <a href="https://github.com/intel-analytics/ipex-llm/pull/11104">Intel® LLM Library for PyTorch</a>
- <a href="https://llm.mlc.ai/docs/deploy/rest.html">MLC-LLM</a>
- <a href="https://docs.nvidia.com/nemo-framework/user-guide/latest/model-optimization/speculative/speculative.html">NVIDIA NeMo Framework</a>
- <a href="https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/eagle">NVIDIA TensorRT-LLM</a>
- <a href="https://nvidia.github.io/TensorRT-Model-Optimizer/guides/7_speculative_decoding.html">NVIDIA TensorRT Model Optimizer</a>
- <a href="https://paddlenlp.readthedocs.io/en/latest/llm/docs/predict/speculative_decoding.html">PaddleNLP</a>
- <a href="https://docs.sglang.ai/advanced_features/speculative_decoding.html">SGLang</a>
- <a href="https://github.com/sgl-project/SpecForge">SpecForge</a>
- <a href="https://github.com/vllm-project/vllm/pull/16937">vLLM</a>
## Reference
For technical details and full experimental results, please check [the paper of EAGLE](https://arxiv.org/pdf/2401.15077.pdf), [the paper of EAGLE-2](https://arxiv.org/pdf/2406.16858), and [the paper of EAGLE-3](https://arxiv.org/pdf/2503.01840).
```
@inproceedings{li2024eagle,
author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
title = {{EAGLE}: Speculative Sampling Requires Rethinking Feature Uncertainty},
booktitle = {International Conference on Machine Learning},
year = {2024}
}
@inproceedings{li2024eagle2,
author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
title = {{EAGLE-2}: Faster Inference of Language Models with Dynamic Draft Trees},
booktitle = {Empirical Methods in Natural Language Processing},
year = {2024}
}
@inproceedings{li2025eagle3,
author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
title = {{EAGLE-3}: Scaling up Inference Acceleration of Large Language Models via Training-Time Test},
booktitle = {Annual Conference on Neural Information Processing Systems},
year = {2025}
}
```
|
mradermacher/BeDLM-1B-GGUF
|
mradermacher
| 2025-09-19T12:12:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:yhytoto12/BeDLM-1B",
"base_model:quantized:yhytoto12/BeDLM-1B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T12:06:28Z |
---
base_model: yhytoto12/BeDLM-1B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/yhytoto12/BeDLM-1B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#BeDLM-1B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
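As a quick alternative, the llama-cpp-python bindings can pull one of the quants from the table below directly from the Hub; this is only a sketch, assuming a llama-cpp-python release that provides `Llama.from_pretrained`.
```python
# Sketch of loading a quant from the table below with llama-cpp-python
# (assumption: a llama-cpp-python release that provides Llama.from_pretrained).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/BeDLM-1B-GGUF",
    filename="BeDLM-1B.Q4_K_M.gguf",  # any file listed below works
    n_ctx=2048,
)
print(llm("Hello, world!", max_tokens=32)["choices"][0]["text"])
```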
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BeDLM-1B-GGUF/resolve/main/BeDLM-1B.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
treasure4l/financial-advisory-gpt-oss-20b
|
treasure4l
| 2025-09-19T12:09:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"dataset:treasure4l/nigerian-financial-qa-reasoning",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T12:04:16Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
datasets:
- treasure4l/nigerian-financial-qa-reasoning
---
# Uploaded model
- **Developed by:** treasure4l
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_8085
|
luckeciano
| 2025-09-19T12:07:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T08:07:42Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_8085
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_8085
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_8085", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/bg7ncs8p)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/mn-12b-the-yapper-GGUF
|
mradermacher
| 2025-09-19T12:05:33Z | 1,039 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Burnt-Toast/mn-12b-the-yapper",
"base_model:quantized:Burnt-Toast/mn-12b-the-yapper",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T04:21:35Z |
---
base_model: Burnt-Toast/mn-12b-the-yapper
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Burnt-Toast/mn-12b-the-yapper
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#mn-12b-the-yapper-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/mn-12b-the-yapper-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mn-12b-the-yapper-GGUF/resolve/main/mn-12b-the-yapper.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/DeepEyes-rebuttal-model-GGUF
|
mradermacher
| 2025-09-19T11:50:14Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ChenShawn/DeepEyes-rebuttal-model",
"base_model:quantized:ChenShawn/DeepEyes-rebuttal-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-19T11:37:55Z |
---
base_model: ChenShawn/DeepEyes-rebuttal-model
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/ChenShawn/DeepEyes-rebuttal-model
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepEyes-rebuttal-model-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
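Because this repository also provides mmproj files (multi-modal supplements, see the table below), image input requires both a language-model quant and an mmproj file. The following is only an illustrative sketch, assuming a recent llama.cpp build with multimodal support; the exact binary name and flags differ between versions, so treat the invocation as an assumption:
```bash
# Download a language-model quant plus the multi-modal projector
huggingface-cli download mradermacher/DeepEyes-rebuttal-model-GGUF \
  DeepEyes-rebuttal-model.Q4_K_M.gguf \
  DeepEyes-rebuttal-model.mmproj-Q8_0.gguf --local-dir .

# Recent llama.cpp builds expose multimodal inference through llama-mtmd-cli
# (older builds used llama-llava-cli); adjust to your build.
./llama-mtmd-cli -m DeepEyes-rebuttal-model.Q4_K_M.gguf \
  --mmproj DeepEyes-rebuttal-model.mmproj-Q8_0.gguf \
  --image example.png -p "Describe this image."
```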
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 1.0 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.mmproj-f16.gguf) | mmproj-f16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepEyes-rebuttal-model-GGUF/resolve/main/DeepEyes-rebuttal-model.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ZJkyle/qwen3-policechat
|
ZJkyle
| 2025-09-19T11:38:14Z | 103 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-16T10:36:50Z |
# Qwen3-4B Police Chat Classification Model
This is a police-chat text classification model fine-tuned from Qwen3-4B, designed to classify police-related chat messages into one of 41 categories.
## Model Information
- **Base Model**: Qwen/Qwen3-4B
- **Task**: Text Classification
- **Classes**: 41 police-related categories (A-AO)
- **Format**: GGUF (GGML Universal Format)
- **Quantization**: Q4_K_M (about 2.4 GB)
## Classification Categories
The model assigns text to one of the following 41 categories. The labels are kept in the original Traditional Chinese because they appear verbatim in the prompt the model expects:
A=投訴員警, B=申請交通初判表, C=良民證, D=查詢案件, E=找人, F=查詢計程車營業證, G=家暴相關, H=失蹤人口, I=局長信箱, J=監視器, K=防空洞, L=婚喪喜慶路權申請, M=保全業務, N=報案, O=詐騙, P=申請國賠, Q=檢舉警察, R=申請大貨車臨時證, S=警示帳戶相關, T=檢舉攤販, U=遺失物/竊盜, V=找局長, W=申請入山, X=警察公墓, Y=署長信箱, Z=署長臉書NPA, AA=交通相關問題, AB=槍砲彈藥刀械, AC=史蹟館, AD=陳情, AE=拍賣相關, AF=法律問題, AG=噪音擾民, AH=問地址, AI=守望相助, AJ=傳真號碼, AK=車禍, AL=違停, AM=交通罰單, AN=查詢當鋪申請, AO=一般詢問
## Usage
### Using llama.cpp
```bash
# Download the model
wget https://huggingface.co/ZJkyle/qwen3-4b-policechat/resolve/main/model-f16-Q4_K_M.gguf
# Run inference with llama.cpp
./llama-cli -m model-f16-Q4_K_M.gguf -p "你是一個精準的文本分類助手。
指令: 請根據以下選項,選擇最適合的分類代碼。只輸出代碼字母。
選項:
A=投訴員警, B=申請交通初判表, C=良民證, D=查詢案件, E=找人, F=查詢計程車營業證, G=家暴相關, H=失蹤人口, I=局長信箱, J=監視器, K=防空洞, L=婚喪喜慶路權申請, M=保全業務, N=報案, O=詐騙, P=申請國賠, Q=檢舉警察, R=申請大貨車臨時證, S=警示帳戶相關, T=檢舉攤販, U=遺失物/竊盜, V=找局長, W=申請入山, X=警察公墓, Y=署長信箱, Z=署長臉書NPA, AA=交通相關問題, AB=槍砲彈藥刀械, AC=史蹟館, AD=陳情, AE=拍賣相關, AF=法律問題, AG=噪音擾民, AH=問地址, AI=守望相助, AJ=傳真號碼, AK=車禍, AL=違停, AM=交通罰單, AN=查詢當鋪申請, AO=一般詢問
內容:
[你的文本內容]"
```
### Using Python (transformers)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load the model (the GGUF weights must first be converted back to Hugging Face format)
model_name = "ZJkyle/qwen3-4b-policechat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Prepare the input
prompt = """你是一個精準的文本分類助手。
指令: 請根據以下選項,選擇最適合的分類代碼。只輸出代碼字母。
選項:
A=投訴員警, B=申請交通初判表, C=良民證, D=查詢案件, E=找人, F=查詢計程車營業證, G=家暴相關, H=失蹤人口, I=局長信箱, J=監視器, K=防空洞, L=婚喪喜慶路權申請, M=保全業務, N=報案, O=詐騙, P=申請國賠, Q=檢舉警察, R=申請大貨車臨時證, S=警示帳戶相關, T=檢舉攤販, U=遺失物/竊盜, V=找局長, W=申請入山, X=警察公墓, Y=署長信箱, Z=署長臉書NPA, AA=交通相關問題, AB=槍砲彈藥刀械, AC=史蹟館, AD=陳情, AE=拍賣相關, AF=法律問題, AG=噪音擾民, AH=問地址, AI=守望相助, AJ=傳真號碼, AK=車禍, AL=違停, AM=交通罰單, AN=查詢當鋪申請, AO=一般詢問
內容:
[你的文本內容]"""
# Run inference
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=1, temperature=0.0)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Training Information
- **Training data**: 13,235 samples
- **Training set**: 10,588 samples
- **Validation set**: 2,647 samples
- **Epochs**: 5
- **Learning rate**: 3.0e-4
- **LoRA configuration**: r=32, alpha=64 (a hypothetical sketch of such a configuration follows below)
- **Final training loss**: 0.1045
- **Final validation loss**: 0.2162
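The original training script is not included in this repository; the snippet below is only a hypothetical illustration of what a LoRA configuration with the reported hyperparameters might look like using the `peft` library. The target modules, dropout, and task type are assumptions, not values documented by the author.
```python
from peft import LoraConfig

# Hypothetical reconstruction of the reported LoRA settings (r=32, alpha=64).
# target_modules, lora_dropout and task_type are assumptions for illustration only.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```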
## Notes
1. The model was fine-tuned with LoRA and is targeted specifically at police-chat text classification.
2. Greedy decoding (temperature=0.0) is recommended for consistent classification results.
3. The model outputs a single letter code corresponding to one of the 41 categories above.
4. If higher precision is needed, the f16 version (7.5 GB) can be used.
## Files
- `model-f16-Q4_K_M.gguf`: quantized version (2.4 GB, recommended)
- `model-f16.gguf`: full-precision version (7.5 GB, for higher accuracy)
## License
This model is fine-tuned from Qwen3-4B; please comply with the corresponding license terms.
|
mradermacher/DevilsAdvocate-1B-GGUF
|
mradermacher
| 2025-09-19T11:38:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"lora",
"sft",
"trl",
"unsloth",
"fine-tuned",
"en",
"dataset:theprint/Advocate-9.4k",
"base_model:theprint/DevilsAdvocate-1B",
"base_model:adapter:theprint/DevilsAdvocate-1B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-19T11:26:40Z |
---
base_model: theprint/DevilsAdvocate-1B
datasets:
- theprint/Advocate-9.4k
language: en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- lora
- sft
- transformers
- trl
- unsloth
- fine-tuned
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/theprint/DevilsAdvocate-1B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DevilsAdvocate-1B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DevilsAdvocate-1B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.Q2_K.gguf) | Q2_K | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.Q8_0.gguf) | Q8_0 | 1.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B.f16.gguf) | f16 | 2.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ellisdoro/afpo-all-MiniLM-L6-v2_cross_attention_gat_h1024_o128_cross_entropy_e128_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T11:36:44Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-cross_attention",
"gnn-gat",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T11:36:41Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-cross_attention
- gnn-gat
- small-ontology
---
# afpo_all-MiniLM-L6-v2_cross_attention_gat_h1024_o128_cross_entropy_e128_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: afpo.owl
- **Domain**: general
- **Ontology Concepts**: 473
- **Concept Alignment**: 473/473 (100.0%)
- **Fusion Method**: cross_attention
- **GNN Architecture**: GAT
- **Structural Embedding Dimension**: 473
- **Output Embedding Dimension**: 128
- **Hidden Dimensions**: 1024
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 1.3 MB
- **Model Size**: 94.8 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines the text and ontological embeddings using the configured fusion method (cross_attention)
**Embedding Flow:**
- Text: 384 dimensions → 1024 hidden → 128 output
- Structure: 473 concepts → GNN → 128 output
- Fusion: cross_attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
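For intuition only, here is a toy PyTorch sketch of cross-attention fusion between a text embedding and a set of ontology-concept embeddings. This is not the on2vec implementation; the dimensions, module choices, and class name are assumptions made for illustration.
```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Toy fusion layer: the text embedding attends over ontology embeddings."""
    def __init__(self, text_dim=384, onto_dim=128, fused_dim=128, num_heads=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.onto_proj = nn.Linear(onto_dim, fused_dim)
        self.attn = nn.MultiheadAttention(fused_dim, num_heads, batch_first=True)

    def forward(self, text_emb, onto_emb):
        # text_emb: (batch, text_dim); onto_emb: (batch, n_concepts, onto_dim)
        q = self.text_proj(text_emb).unsqueeze(1)   # (batch, 1, fused_dim)
        kv = self.onto_proj(onto_emb)               # (batch, n_concepts, fused_dim)
        fused, _ = self.attn(q, kv, kv)             # text attends over concepts
        return fused.squeeze(1)                     # (batch, fused_dim)

fusion = CrossAttentionFusion()
text = torch.randn(2, 384)       # batch of 2 text embeddings
onto = torch.randn(2, 10, 128)   # 10 ontology-concept embeddings per example
print(fusion(text, onto).shape)  # torch.Size([2, 128])
```
In the released model, the actual fusion weights are learned as part of the on2vec training pipeline described below.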
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('afpo_all-MiniLM-L6-v2_cross_attention_gat_h1024_o128_cross_entropy_e128_early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
mradermacher/Magrathic-12B-GGUF
|
mradermacher
| 2025-09-19T11:25:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:grimjim/Magrathic-12B",
"base_model:quantized:grimjim/Magrathic-12B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T08:25:05Z |
---
base_model: grimjim/Magrathic-12B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/grimjim/Magrathic-12B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Magrathic-12B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Magrathic-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Magrathic-12B-GGUF/resolve/main/Magrathic-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aamijar/Llama-2-7b-hf-lora-r2-rte-epochs1
|
aamijar
| 2025-09-19T11:25:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T11:25:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ellisdoro/EDAM-all-MiniLM-L6-v2_cross_attention_rgcn_h512_o128_cross_entropy_e128_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T11:19:11Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"biomedical",
"biomedical-ontology",
"fusion-cross_attention",
"gnn-rgcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T11:19:03Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- biomedical
- biomedical-ontology
- fusion-cross_attention
- gnn-rgcn
- medium-ontology
---
# EDAM_all-MiniLM-L6-v2_cross_attention_rgcn_h512_o128_cross_entropy_e128_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: EDAM.owl
- **Domain**: biomedical
- **Ontology Concepts**: 3,511
- **Concept Alignment**: 3,511/3,511 (100.0%)
- **Fusion Method**: cross_attention
- **GNN Architecture**: RGCN
- **Structural Embedding Dimension**: 3511
- **Output Embedding Dimension**: 128
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.2 MB
- **Model Size**: 119.5 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines the text and ontological embeddings using the configured fusion method (cross_attention)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 128 output
- Structure: 3511 concepts → GNN → 128 output
- Fusion: cross_attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('EDAM_all-MiniLM-L6-v2_cross_attention_rgcn_h512_o128_cross_entropy_e128_early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- Biomedical domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
qownscks/banana_doll
|
qownscks
| 2025-09-19T11:05:55Z | 16 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:qownscks/banana_doll",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-17T21:32:56Z |
---
base_model: lerobot/smolvla_base
datasets: qownscks/banana_doll
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- lerobot
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
ellisdoro/disdriv-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e128_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T11:01:53Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-cross_attention",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T11:01:50Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-cross_attention
- gnn-gcn
- small-ontology
---
# disdriv_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e128_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: disdriv.owl
- **Domain**: general
- **Ontology Concepts**: 18
- **Concept Alignment**: 18/18 (100.0%)
- **Fusion Method**: cross_attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 18
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.0 MB
- **Model Size**: 91.1 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines the text and ontological embeddings using the configured fusion method (cross_attention)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 18 concepts → GNN → 64 output
- Fusion: cross_attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('disdriv_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e128_early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/dideo-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T11:01:37Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-additive",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T11:01:34Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- small-ontology
---
# dideo_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: dideo.owl
- **Domain**: general
- **Ontology Concepts**: 416
- **Concept Alignment**: 416/416 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 416
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.9 MB
- **Model Size**: 90.8 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines the text and ontological embeddings using the configured fusion method (additive)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 416 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('dideo_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/cteno-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e512_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T10:59:12Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-additive",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T10:59:10Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- small-ontology
---
# cteno_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e512_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cteno.owl
- **Domain**: general
- **Ontology Concepts**: 172
- **Concept Alignment**: 172/172 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 172
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.3 MB
- **Model Size**: 88.9 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines the text and ontological embeddings using the configured fusion method (additive)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 172 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('cteno_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e512_early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/cteno-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e128_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T10:58:54Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-additive",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T10:58:51Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- small-ontology
---
# cteno_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e128_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cteno.owl
- **Domain**: general
- **Ontology Concepts**: 172
- **Concept Alignment**: 172/172 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 172
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.3 MB
- **Model Size**: 88.9 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines the text and ontological embeddings using the configured fusion method (additive)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 172 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('cteno_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e128_early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/cob-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e128_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T10:56:54Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-additive",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T10:56:52Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- small-ontology
---
# cob_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e128_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cob.owl
- **Domain**: general
- **Ontology Concepts**: 68
- **Concept Alignment**: 68/68 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 68
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.1 MB
- **Model Size**: 88.1 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines the text and ontological embeddings using the configured fusion method (additive)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 68 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('cob_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e128_early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
winnieyangwannan/popqa_gpt-oss-20b_experts-down_pnas_layer_14_12_all_37_0.0001_6400_3
|
winnieyangwannan
| 2025-09-19T10:54:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T10:51:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kimono998/wordle-exp-pos-3-lora-adapter-iter-25
|
kimono998
| 2025-09-19T10:50:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T10:49:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
selsar/social_roles
|
selsar
| 2025-09-19T10:45:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-19T10:44:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kimono998/wordle-exp-gen-sb-lora-adapter-iter-25
|
kimono998
| 2025-09-19T10:43:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T10:41:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758278299
|
schooncestiaa
| 2025-09-19T10:39:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T10:39:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jesse-dfp/Qwen3-0.6B-Gensyn-Swarm-large_elusive_clam
|
jesse-dfp
| 2025-09-19T10:39:19Z | 73 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am large_elusive_clam",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T13:51:19Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am large_elusive_clam
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlgoRythm124/supernova-25uslmil
|
AlgoRythm124
| 2025-09-19T10:36:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-14T12:05:12Z |
# 🌟 Supernova USLM - Ultra-Small Language Model
A 25-million-parameter language model designed for efficient conversational AI with minimal computational requirements.
## 🚀 Features
- **Ultra-Compact**: Only 25M parameters while maintaining conversational quality
- **Modern Architecture**:
- Grouped Query Attention (GQA) for efficiency
- Rotary Position Embeddings (RoPE)
- SwiGLU activation functions
- Sliding window attention
- RMS normalization
- **Conversation Fine-tuning**: Specialized training for chat interactions
- **Efficient Generation**: Advanced sampling strategies and stopping criteria
- **Easy to Use**: Simple chat interface and training pipeline
## 📁 Project Structure
```
USLM/
├── supernova_model.py # Core model architecture
├── tokenizer.py # Tokenization and text preprocessing
├── chat_interface.py # Interactive chat interface
├── web_ui.py # Modern web UI (DeepSeek-inspired)
├── training.py # Training pipeline for conversation fine-tuning
├── demo.py # Comprehensive demo script
├── run_webui.py # Web UI launcher script
├── run_webui.bat # Windows launcher for web UI
├── safety_config.py # Safety checks and company responses
├── web_search.py # Web search integration
├── requirements.txt # Python dependencies
└── README.md # This file
```
## ⚡ Quick Start
### 1. Install Dependencies
```bash
pip install -r requirements.txt
```
### 2. Web UI (Recommended) 🌐
**The easiest way to use Supernova:**
```bash
# Launch the modern web interface
python run_webui.py
# Or on Windows:
run_webui.bat
```
This opens a beautiful DeepSeek-inspired web interface at http://localhost:8501 with:
- 🎨 Modern dark theme with gradient styling
- 💬 Real-time chat interface
- ⚙️ Adjustable generation settings
- 📊 Model information and system status
- 💾 Conversation saving/loading
- 🔧 Easy model switching
### 3. Run the Demo
```bash
# Full demo (recommended for first time)
python demo.py
# Quick demo (faster)
python demo.py --quick
# Only chat interface
python demo.py --mode chat
# Only training demo
python demo.py --mode train
```
### 4. Interactive Chat (CLI)
```bash
# Direct CLI chat interface
python chat_interface.py
```
## 🛠️ Installation
### Requirements
- Python 3.8+
- PyTorch 2.0+
- CUDA (optional, for GPU acceleration)
### Install from Requirements
```bash
pip install "torch>=2.0.0" "transformers>=4.30.0" "datasets>=2.10.0" "accelerate>=0.20.0" "tokenizers>=0.13.0" "numpy>=1.24.0" "tqdm>=4.65.0" "tensorboard>=2.13.0" "scikit-learn>=1.2.0" "sentencepiece>=0.1.99" "wandb>=0.15.0" "einops>=0.6.0" "bitsandbytes>=0.40.0" "peft>=0.4.0"
```
## 📚 Usage Examples
### Basic Model Usage
```python
from supernova_model import create_supernova_model
from tokenizer import SupernovaTokenizer
# Create model and tokenizer
model = create_supernova_model()
tokenizer = SupernovaTokenizer()
# Basic inference
text = "Hello, how are you?"
input_ids = tokenizer.encode(text)
# ... (see demo.py for complete example)
```
### Chat Interface
```python
from chat_interface import SupernovaChat
# Initialize chat
chat = SupernovaChat()
# Single interaction
response = chat.chat("What is machine learning?")
print(response)
# Interactive mode
chat.interactive_chat()
```
### Web UI Usage
```python
# Web UI runs automatically - just launch with:
# python run_webui.py
# Programmatic access to web UI components:
from web_ui import SupernovaWebUI
ui = SupernovaWebUI()
ui.run() # Starts the Streamlit app
```
### Training
```python
from training import SupernovaTrainer, TrainingConfig
# Setup training configuration
config = TrainingConfig(
batch_size=4,
learning_rate=3e-5,
max_epochs=3,
output_dir="outputs"
)
# Train the model
trainer = SupernovaTrainer(config)
trainer.train()
```
## 🎯 Model Architecture
**Supernova USLM** uses a transformer decoder architecture optimized for efficiency:
- **Parameters**: 25M total
- **Layers**: 8 transformer blocks
- **Hidden Size**: 768
- **Attention Heads**: 12 (with 4 key-value heads for GQA)
- **Vocabulary**: 32K tokens
- **Context Length**: 2048 tokens
- **Sliding Window**: 512 tokens for long sequences
### Key Innovations
1. **Grouped Query Attention**: Reduces memory usage by sharing key-value heads (see the sketch after this list)
2. **Partial Rotary Embeddings**: Only 50% of dimensions use RoPE for efficiency
3. **SwiGLU Activation**: More efficient than standard ReLU/GELU
4. **Sliding Window Attention**: Handles longer contexts efficiently
5. **Conversation-Specific Training**: Loss masking for chat fine-tuning
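To make item 1 concrete, here is a minimal sketch of grouped-query attention using the head counts listed above (12 query heads sharing 4 key-value heads, head dim 768 / 12 = 64). The real implementation lives in `supernova_model.py` and may differ in detail; this only illustrates the head-sharing idea.

```python
import torch
import torch.nn.functional as F

def repeat_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor:
    # (batch, num_kv_heads, seq, head_dim) -> (batch, num_kv_heads * n_rep, seq, head_dim)
    return x.repeat_interleave(n_rep, dim=1)

batch, seq, head_dim = 1, 16, 64
num_heads, num_kv_heads = 12, 4          # values from the architecture list above

q = torch.randn(batch, num_heads, seq, head_dim)
k = torch.randn(batch, num_kv_heads, seq, head_dim)   # only 4 K/V heads are stored
v = torch.randn(batch, num_kv_heads, seq, head_dim)

# Each group of 3 query heads attends over the same shared K/V head.
k = repeat_kv(k, num_heads // num_kv_heads)
v = repeat_kv(v, num_heads // num_kv_heads)
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 12, 16, 64])
```

Because only 4 key-value heads are ever cached, the KV cache shrinks by the query-to-KV head ratio while the attention output keeps the full 12-head shape.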
## 🚂 Training
### Data Format
The model expects conversation data in this format:
```json
{
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is AI?"},
{"role": "assistant", "content": "AI is artificial intelligence..."}
]
}
```
### Training Configuration
```python
config = TrainingConfig(
model_name="supernova-chat",
batch_size=4,
gradient_accumulation_steps=4,
learning_rate=3e-5,
max_epochs=3,
max_sequence_length=1024,
mixed_precision=True,
mask_user_tokens=True # Only train on assistant responses
)
```
### Training Pipeline
1. **Data Preparation**: Converts conversations to training format
2. **Loss Masking**: Only trains on assistant responses (see the sketch after this list)
3. **Mixed Precision**: Faster training with FP16
4. **Gradient Accumulation**: Effective larger batch sizes
5. **Cosine Annealing**: Learning rate scheduling
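Item 2 (loss masking) is the step most specific to chat fine-tuning. Below is a minimal sketch of the idea, not the actual code in `training.py`: tokens from system and user turns keep their input IDs but receive a label of -100, which PyTorch's cross-entropy ignores, so gradients only flow through assistant tokens.

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # cross_entropy skips positions with this label

def build_labels(input_ids: torch.Tensor, assistant_mask: torch.Tensor) -> torch.Tensor:
    """Copy input_ids as labels, but hide non-assistant tokens from the loss."""
    labels = input_ids.clone()
    labels[~assistant_mask] = IGNORE_INDEX
    return labels

# Toy example: 6 tokens, the last 3 belong to the assistant turn.
input_ids = torch.tensor([[11, 12, 13, 21, 22, 23]])
assistant_mask = torch.tensor([[False, False, False, True, True, True]])
labels = build_labels(input_ids, assistant_mask)

# Shift-by-one language-modeling loss over a dummy vocabulary of 100 tokens.
logits = torch.randn(1, 6, 100)
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, 100),
    labels[:, 1:].reshape(-1),
    ignore_index=IGNORE_INDEX,
)
print(loss)
```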
## 🌐 Web UI Features
The Supernova Web UI provides a modern, user-friendly interface:
### 🎨 Design Features
- **DeepSeek-inspired theme**: Dark mode with beautiful gradients
- **Responsive layout**: Works on desktop, tablet, and mobile
- **Real-time chat**: Instant message display with typing indicators
- **Smooth animations**: Hover effects and transitions
### ⚙️ Functionality
- **Model status**: Live model information and GPU memory usage
- **Generation settings**: Adjustable temperature, top-k, top-p, max tokens
- **Quick actions**: Pre-built prompts for common questions
- **Conversation management**: Save/load chat history
- **Safety integration**: Built-in content filtering
- **Web search**: Live search integration (when API key provided)
### 🚀 Quick Actions
- "Who are you?" - Learn about Supernova
- "Tell me about AlgoRythm" - Company information
- "Explain AI" - AI education
- "How do you work?" - Technical details
### 📱 Mobile Friendly
The web UI automatically adapts to different screen sizes for an optimal mobile experience.
## 💬 Chat Features
### Special Commands
- `reset` - Clear conversation history
- `system <prompt>` - Change system prompt
- `save <filename>` - Save conversation
- `load <filename>` - Load conversation
- `quit` / `exit` / `bye` - End chat
### Generation Parameters
```python
response = chat.generate_response(
user_input="Hello!",
temperature=0.7, # Randomness
top_k=40, # Top-k filtering
top_p=0.9, # Nucleus filtering
repetition_penalty=1.1, # Reduce repetition
max_new_tokens=256 # Response length
)
```
## ⚙️ Configuration
### Model Configuration
```python
from supernova_model import SupernovaConfig
config = SupernovaConfig(
vocab_size=32000,
hidden_size=768,
num_hidden_layers=8,
num_attention_heads=12,
num_key_value_heads=4,
intermediate_size=2048,
max_position_embeddings=2048,
use_sliding_window=True,
sliding_window_size=512
)
```
### Generation Configuration
```python
from chat_interface import GenerationConfig
gen_config = GenerationConfig(
temperature=0.7,
top_k=40,
top_p=0.9,
repetition_penalty=1.1,
max_new_tokens=256,
do_sample=True
)
```
## 🔍 Model Performance
### Specifications
- **Parameters**: 25,165,824 (25.2M)
- **Model Size**: ~100MB (FP32), ~50MB (FP16)
- **Memory Usage**: ~1GB GPU memory for inference
- **Speed**: 20-50 tokens/sec on modern GPUs
- **Context**: 2048 tokens with sliding window support
### Efficiency Optimizations
1. **GQA**: 3x reduction in KV cache size (see the back-of-the-envelope calculation after this list)
2. **Partial RoPE**: 2x faster position encoding
3. **SwiGLU**: 1.5x faster than standard FFN
4. **Mixed Precision**: 2x faster training, 50% memory reduction
5. **Sliding Window**: Constant memory for long sequences
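For intuition on item 1, here is a back-of-the-envelope KV-cache estimate using the configuration values above (8 layers, 2048-token context, head dim 64, FP16); the 3x factor is simply the 12:4 ratio of query heads to key-value heads, so this is an illustration rather than a measured number.

```python
# Rough KV-cache size for one full-length sequence, using the config values above.
layers, seq_len, head_dim, bytes_per_value = 8, 2048, 64, 2  # 2 bytes per value in FP16

def kv_cache_bytes(num_kv_heads: int) -> int:
    # 2 tensors (K and V) per layer, each of shape (seq_len, num_kv_heads, head_dim).
    return 2 * layers * seq_len * num_kv_heads * head_dim * bytes_per_value

mha = kv_cache_bytes(12)  # full multi-head attention would cache all 12 heads
gqa = kv_cache_bytes(4)   # GQA caches only the 4 shared key-value heads
print(f"MHA: {mha / 1e6:.1f} MB, GQA: {gqa / 1e6:.1f} MB, reduction: {mha / gqa:.1f}x")
```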
## 🧪 Testing
### Run Tests
```bash
# Basic functionality test
python demo.py --mode test
# Training test
python demo.py --mode train --quick
# Chat test
python demo.py --mode chat
```
### Performance Benchmarks
```python
# Benchmark generation speed
from chat_interface import SupernovaChat
import time
chat = SupernovaChat()
start = time.time()
response = chat.chat("Write a short story about AI.")
end = time.time()
print(f"Generated in {end-start:.2f}s")
```
## 🚀 Deployment
### CPU Deployment
```python
chat = SupernovaChat(device="cpu")
```
### GPU Deployment
```python
chat = SupernovaChat(device="cuda") # or device="auto"
```
### Model Saving/Loading
```python
# Save trained model
trainer.save_model("my_model")
# Load trained model
chat = SupernovaChat(model_path="my_model")
```
## 📊 Monitoring
### Training Logs
Training automatically logs to:
- Console output
- `outputs/training.log`
- TensorBoard (if enabled)
- Weights & Biases (if configured)
### Model Checkpoints
- `best_model/` - Best validation loss
- `final_model/` - Final training state
- `checkpoint-{step}/` - Regular checkpoints
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Make changes with tests
4. Submit a pull request
### Development Setup
```bash
git clone <repository>
cd USLM
pip install -r requirements.txt
python demo.py --mode test
```
## 📄 License
This project is open source. See the LICENSE file for details.
## 🎯 Future Improvements
- [ ] Support for additional tokenizers (SentencePiece, etc.)
- [ ] Quantization support (4-bit, 8-bit)
- [ ] ONNX export for deployment
- [ ] Fine-tuning on larger conversation datasets
- [ ] Multi-modal capabilities
- [ ] Streaming generation API
- [ ] Docker containerization
## 🏆 Acknowledgments
- Transformer architecture from "Attention Is All You Need"
- RoPE from "RoFormer: Enhanced Transformer with Rotary Position Embedding"
- SwiGLU from "GLU Variants Improve Transformer"
- GQA from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints"
## 📞 Support
For questions, issues, or contributions:
1. Check existing GitHub issues
2. Create a new issue with detailed description
3. Join discussions in GitHub Discussions
---
**Happy chatting with Supernova USLM! 🌟**
|
JheiKrauzer/blockassist
|
JheiKrauzer
| 2025-09-19T10:32:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged nimble bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T10:32:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged nimble bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758277678
|
schooncestiaa
| 2025-09-19T10:29:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T10:29:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eiknarf/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-rapid_stocky_stork
|
eiknarf
| 2025-09-19T10:22:19Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rapid stocky stork",
"unsloth",
"trl",
"genrl-swarm",
"I am rapid_stocky_stork",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-10T17:36:29Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-rapid_stocky_stork
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rapid stocky stork
- unsloth
- trl
- genrl-swarm
- I am rapid_stocky_stork
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-rapid_stocky_stork
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eiknarf/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-rapid_stocky_stork", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
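For readers unfamiliar with GRPO, the sketch below shows how a minimal GRPO run is typically wired up with TRL. The prompt dataset, reward function, and hyperparameters here are illustrative placeholders, not the configuration used for this swarm checkpoint; in the actual swarm setup, prompts and rewards are supplied by the RL-swarm training loop.

```python
# Illustrative only: a minimal TRL GRPO setup, not the swarm's actual configuration.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Hypothetical prompt-only dataset; GRPO samples completions and scores them.
train_dataset = Dataset.from_dict({"prompt": ["Solve: 12 * 7 = ?", "Solve: 45 + 19 = ?"]})

def reward_len(completions, **kwargs):
    # Toy reward: prefer shorter answers. Real runs use task-specific rewards.
    return [-float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(
        output_dir="grpo-output",
        per_device_train_batch_size=4,
        num_generations=4,          # completions sampled per prompt
        max_completion_length=64,
    ),
    train_dataset=train_dataset,
)
trainer.train()
```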
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|