modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-22 00:45:16) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 570 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-22 00:43:28) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
rmtlabs/s-ai-Qwen-Qwen2.5-1.5B-Instruct-adapter | rmtlabs | 2025-09-19T10:01:10Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "lora", "transformers", "text-generation", "conversational", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "region:us"] | text-generation | 2025-09-19T10:01:04Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
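The card leaves this section empty; the following is a minimal, hypothetical sketch inferred only from the repo metadata (a PEFT LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct), not an official snippet:
```python
# Hypothetical usage sketch based on the repo metadata; not from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B-Instruct"  # base model named in the metadata
adapter_id = "rmtlabs/s-ai-Qwen-Qwen2.5-1.5B-Instruct-adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello, who are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```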
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
dibahme/swissai-tokenizer-2-boc-eoc | dibahme | 2025-09-19T09:55:29Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-19T09:55:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
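This section is empty in the card; since the repo name and the `transformers` tag suggest a tokenizer artifact, a hedged loading sketch (an unverified assumption) might look like this:
```python
# Hedged sketch: the repo name suggests a tokenizer, so loading it with
# AutoTokenizer is an assumption, not documented usage from the card.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dibahme/swissai-tokenizer-2-boc-eoc")
print(tokenizer.tokenize("Hello world"))
```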
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ifmylove2011/girlslike-qweni | ifmylove2011 | 2025-09-19T09:53:10Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2025-09-19T08:46:36Z |
---
license: mit
---
For testing a character LoRA trained on Qwen Image.
fp8 base model, 8-step acceleration LoRA, Chinese prompts, shift=2.0, cfg=2.0, steps=10.
# GirlsLike-Qweni Image Showcase
### girlslikeqweni_lyf

|
aamijar/MaskLLM-Llama-2-7b-hf-lora-r8-mrpc-epochs3 | aamijar | 2025-09-19T09:47:30Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-19T09:47:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Kenko-mental-health-llama-3-model-GGUF | mradermacher | 2025-09-19T09:44:10Z | 2,774 | 0 | transformers | ["transformers", "gguf", "medical", "en", "base_model:IniNLP247/Kenko-mental-health-llama-3-model", "base_model:quantized:IniNLP247/Kenko-mental-health-llama-3-model", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-11T04:39:27Z |
---
base_model: IniNLP247/Kenko-mental-health-llama-3-model
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/IniNLP247/Kenko-mental-health-llama-3-model
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Kenko-mental-health-llama-3-model-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
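As a hedged illustration (not from this card), a single-file quant can be loaded with `llama-cpp-python`; the repo id and filename below are taken from the quant table that follows:
```python
# Hedged sketch using llama-cpp-python; repo/filename come from the table below.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Kenko-mental-health-llama-3-model-GGUF",
    filename="Kenko-mental-health-llama-3-model.Q4_K_M.gguf",
)
out = llm("Q: How can I manage stress?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```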
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kenko-mental-health-llama-3-model-GGUF/resolve/main/Kenko-mental-health-llama-3-model.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
armghan23/finetuned-small-model | armghan23 | 2025-09-19T09:39:12Z | 113 | 0 | adapter-transformers | ["adapter-transformers", "safetensors", "llama", "random", "test", "text-generation", "en", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us"] | text-generation | 2025-09-16T13:40:10Z |
---
license: llama3.1
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
library_name: adapter-transformers
tags:
- random
- test
---
## Description
This model is fine-tuned from Llama-3.1-8B-Instruct. It is intended for testing purposes only.
|
aamijar/MaskLLM-Llama-2-7b-hf-lora-r8-mrpc-epochs2 | aamijar | 2025-09-19T09:37:43Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-19T09:37:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
selsar/labor_market_position | selsar | 2025-09-19T09:30:10Z | 0 | 0 | transformers | ["transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-09-19T09:29:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
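The card leaves this section empty; given the `deberta-v2` and `text-classification` tags in the repo metadata, a hedged sketch (the label set is undocumented, so outputs are raw label ids and scores) could be:
```python
# Hedged sketch based only on repo tags (deberta-v2, text-classification);
# the label meanings are not documented in the card.
from transformers import pipeline

clf = pipeline("text-classification", model="selsar/labor_market_position")
print(clf("We are hiring a senior data engineer in Berlin."))
```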
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomaarsen/mdbr-leaf-mt-asym | tomaarsen | 2025-09-19T09:25:34Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "transformers", "sentence-similarity", "feature-extraction", "text-embeddings-inference", "information-retrieval", "knowledge-distillation", "en", "arxiv:2509.12539", "arxiv:2205.13147", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | feature-extraction | 2025-09-19T09:23:33Z |
---
license: apache-2.0
base_model: microsoft/MiniLM-L6-v2
tags:
- transformers
- sentence-transformers
- sentence-similarity
- feature-extraction
- text-embeddings-inference
- information-retrieval
- knowledge-distillation
language:
- en
---
> [!Warning]
> I am not affiliated with MongoDB. This model is an implementation of the asymmetric strategy as described in [`mdbr-leaf-mt`](https://huggingface.co/MongoDB/mdbr-leaf-mt), designed for the MongoDB team to adopt/clone.
<div style="display: flex; justify-content: center;">
<div style="display: flex; align-items: center; gap: 10px;">
<img src="logo.webp" alt="MongoDB Logo" style="height: 36px; width: auto; border-radius: 4px;">
<span style="font-size: 32px; font-weight: bold">MongoDB/mdbr-leaf-mt-asym</span>
</div>
</div>
# Content
1. [Introduction](#introduction)
2. [Technical Report](#technical-report)
3. [Highlights](#highlights)
4. [Benchmarks](#benchmark-comparison)
5. [Quickstart](#quickstart)
6. [Citation](#citation)
# Introduction
`mdbr-leaf-mt-asym` is a compact, high-performance text embedding model designed for classification, clustering, semantic sentence similarity, and summarization tasks.
This model is the asymmetric variant of `mdbr-leaf-mt`: it encodes queries with [`MongoDB/mdbr-leaf-mt`](https://huggingface.co/MongoDB/mdbr-leaf-mt) and documents with [`mixedbread-ai/mxbai-embed-large-v1`](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1).
The model is robust to [vector quantization](#vector-quantization) and [MRL truncation](#mrl-truncation).
If you are looking to perform semantic search / information retrieval (e.g., for RAG), please check out our [`mdbr-leaf-ir`](https://huggingface.co/MongoDB/mdbr-leaf-ir) model, which is trained specifically for these tasks.
> [!Note]
> **Note**: this model has been developed by the ML team of MongoDB Research. At the time of writing it is not used in any of MongoDB's commercial product or service offerings.
# Technical Report
A technical report detailing our proposed `LEAF` training procedure is [available here](https://arxiv.org/abs/2509.12539).
# Highlights
* **State-of-the-Art Performance**: `mdbr-leaf-mt-asym` achieves state-of-the-art results for compact embedding models, **ranking #1** on the public [BEIR benchmark leaderboard](https://huggingface.co/spaces/mteb/leaderboard) for models with ≤100M parameters.
* **Flexible Architecture Support**: `mdbr-leaf-mt-asym` uses an asymmetric retrieval architecture, which generally yields stronger retrieval quality than the symmetric setup.
* **MRL and Quantization Support**: embedding vectors generated by `mdbr-leaf-mt-asym` compress well when truncated (MRL) and can be stored using more efficient types like `int8` and `binary`. [See below](#mrl-truncation) for more information.
## Benchmark Comparison
The table below shows the scores for `mdbr-leaf-mt` on the MTEB v2 (English) benchmark, compared to other compact embedding models.
`mdbr-leaf-mt` ranks #1 on this benchmark for models with <30M parameters.
| Model | Size | MTEB v2 (Eng) |
|------------------------------------|---------|---------------|
| OpenAI text-embedding-3-large | Unknown | 66.43 |
| OpenAI text-embedding-3-small | Unknown | 64.56 |
| **mdbr-leaf-mt** | 23M | **63.97** |
| gte-small | 33M | 63.22 |
| snowflake-arctic-embed-s | 32M | 61.59 |
| e5-small-v2 | 33M | 61.32 |
| granite-embedding-small-english-r2 | 47M | 61.07 |
| all-MiniLM-L6-v2 | 22M | 59.03 |
# Quickstart
## Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("MongoDB/mdbr-leaf-mt-asym")
# Example queries and documents
queries = [
"What is machine learning?",
"How does neural network training work?",
]
documents = [
"Machine learning is a subset of artificial intelligence that focuses on algorithms that can learn from data.",
"Neural networks are trained through backpropagation, adjusting weights to minimize prediction errors.",
]
# Encode queries and documents
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
# Compute similarity scores
scores = model.similarity(query_embeddings, document_embeddings)
# Print results
for i, query in enumerate(queries):
print(f"Query: {query}")
for j, doc in enumerate(documents):
print(f" Similarity: {scores[i, j]:.4f} | Document {j}: {doc[:80]}...")
# Query: What is machine learning?
# Similarity: 0.8483 | Document 0: Machine learning is a subset of artificial intelligence that focuses on algorith...
# Similarity: 0.6805 | Document 1: Neural networks are trained through backpropagation, adjusting weights to minimi...
# Query: How does neural network training work?
# Similarity: 0.6050 | Document 0: Machine learning is a subset of artificial intelligence that focuses on algorith...
# Similarity: 0.7689 | Document 1: Neural networks are trained through backpropagation, adjusting weights to minimi...
```
## Transformers Usage
See [here](https://huggingface.co/MongoDB/mdbr-leaf-mt/blob/main/transformers_example_mt.ipynb).
## Asymmetric Retrieval Setup
`mdbr-leaf-mt` is *aligned* to [`mxbai-embed-large-v1`](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1), the model it has been distilled from. This enables flexible architectures in which, for example, documents are encoded using the larger model, while queries can be encoded faster and more efficiently with the compact `leaf` model. This generally outperforms the symmetric setup in which both queries and documents are encoded with `leaf`.
To use exclusively the leaf model, use [`mdbr-leaf-mt`](https://huggingface.co/MongoDB/mdbr-leaf-mt).
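For illustration, the asymmetric split can also be reproduced manually with two separate models. This is a hedged sketch; `mdbr-leaf-mt-asym` already performs this routing internally via `encode_query` / `encode_document`, and the sketch assumes the two models share an aligned embedding space, as the paragraph above states:
```python
# Hedged sketch of the asymmetric setup with two separate models.
from sentence_transformers import SentenceTransformer

query_model = SentenceTransformer("MongoDB/mdbr-leaf-mt")              # compact query encoder
doc_model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")  # larger document encoder

q = query_model.encode(["What is machine learning?"])
d = doc_model.encode(["Machine learning is a subset of artificial intelligence."])
print(query_model.similarity(q, d))  # works because the models are aligned
```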
## MRL Truncation
Embeddings have been trained via [MRL](https://arxiv.org/abs/2205.13147) and can be truncated for more efficient storage:
```python
query_embeds = model.encode_query(queries, truncate_dim=256)
doc_embeds = model.encode_document(documents, truncate_dim=256)
similarities = model.similarity(query_embeds, doc_embeds)
print('After MRL:')
print(f"* Embeddings dimension: {query_embeds.shape[1]}")
print(f"* Similarities:\n{similarities}")
# After MRL:
# * Embeddings dimension: 256
# * Similarities:
# tensor([[0.8584, 0.6921],
# [0.5973, 0.7893]])
```
## Vector Quantization
Vector quantization, for example to `int8` or `binary`, can be performed as follows.
**Note**: For vector quantization to types other than binary, we suggest performing a calibration to determine the optimal ranges ([see here](https://sbert.net/examples/sentence_transformer/applications/embedding-quantization/README.html#scalar-int8-quantization)); good initial values are -1.0 and +1.0.
```python
from sentence_transformers.quantization import quantize_embeddings
import torch
query_embeds = model.encode(queries, prompt_name="query")
doc_embeds = model.encode(documents)
# Quantize embeddings to int8 using -1.0 and +1.0
ranges = torch.tensor([[-1.0], [+1.0]]).expand(2, query_embeds.shape[1]).cpu().numpy()
query_embeds = quantize_embeddings(query_embeds, "int8", ranges=ranges)
doc_embeds = quantize_embeddings(doc_embeds, "int8", ranges=ranges)
# Calculate similarities; cast to int64 to avoid under/overflow
similarities = query_embeds.astype(int) @ doc_embeds.astype(int).T
print('After quantization:')
print(f"* Embeddings type: {query_embeds.dtype}")
print(f"* Similarities:\n{similarities}")
# After quantization:
# * Embeddings type: int8
# * Similarities:
# [[2202032 1422868]
# [1421197 1845580]]
```
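Binary quantization, mentioned above, can be sketched similarly. This is a hedged illustration reusing `model`, `queries`, `documents`, and `quantize_embeddings` from the snippet above; packed-bit embeddings are compared with Hamming-style bit counts rather than dot products:
```python
# Hedged sketch: "ubinary" packs each embedding into uint8 bytes;
# fewer differing bits (lower XOR popcount) means more similar.
import numpy as np

q_bin = quantize_embeddings(model.encode(queries, prompt_name="query"), "ubinary")
d_bin = quantize_embeddings(model.encode(documents), "ubinary")

xor = np.bitwise_xor(q_bin[:, None, :], d_bin[None, :, :])
hamming = np.unpackbits(xor, axis=-1).sum(axis=-1)  # lower = more similar
print(hamming)
```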
# Evaluation
Please [see here](https://huggingface.co/MongoDB/mdbr-leaf-mt/blob/main/evaluate_models.ipynb).
# Citation
If you use this model in your work, please cite:
```bibtex
@misc{mdbr_leaf,
title={LEAF: Knowledge Distillation of Text Embedding Models with Teacher-Aligned Representations},
author={Robin Vujanic and Thomas Rueckstiess},
year={2025},
eprint={2509.12539},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2509.12539},
}
```
# License
This model is released under Apache 2.0 License.
# Contact
For questions or issues, please open an issue or pull request. You can also contact the MongoDB ML research team at [email protected].
|
lerobot/pi0 | lerobot | 2025-09-19T09:18:38Z | 10,879 | 294 | lerobot | ["lerobot", "safetensors", "robotics", "arxiv:2410.24164", "license:apache-2.0", "region:us"] | robotics | 2025-02-03T10:01:16Z |
---
license: apache-2.0
library_name: lerobot
pipeline_tag: robotics
---
## Pi0 pretrained model
This repository contains the model described in [π_0: A Vision-Language-Action Flow Model for General Robot Control](https://huggingface.co/papers/2410.24164).
See the [Twitter thread](https://x.com/RemiCadene/status/1886823939856589296) and [blog post](https://huggingface.co/blog/pi0) for more info regarding its integration in [LeRobot](https://github.com/huggingface/lerobot).
## Usage
You can download and use this model with:
```python
# Import path inferred from the policy module linked below (an assumption,
# not shown in the original snippet)
from lerobot.common.policies.pi0.modeling_pi0 import Pi0Policy

policy = Pi0Policy.from_pretrained("lerobot/pi0")
action = policy.select_action(batch)  # `batch` holds observations from your env pipeline
```
## Fine-tuning
You can easily fine-tune it on your own dataset; for instance, on @dana_55517's [dataset](https://huggingface.co/spaces/lerobot/visualize_dataset?dataset=danaaubakirova%2Fkoch_test&episode=0):
```bash
python lerobot/scripts/train.py \
--policy.path=lerobot/pi0 \
--dataset.repo_id=danaaubakirova/koch_test
```
Take a look at the [code](https://github.com/huggingface/lerobot/blob/main/lerobot/common/policies/pi0/modeling_pi0.py) for implementation details.
|
ellisdoro/chiro-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early | ellisdoro | 2025-09-19T09:15:38Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-additive", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-09-19T09:15:35Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- small-ontology
---
# chiro_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: chiro.owl
- **Domain**: general
- **Ontology Concepts**: 26
- **Concept Alignment**: 26/26 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 26
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.2 MB
- **Model Size**: 87.8 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Additive combination of text and ontological embeddings (matching the additive fusion method above)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 26 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('chiro_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/chiro-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e512_early-on2vec-koji-early | ellisdoro | 2025-09-19T09:15:31Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-additive", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-09-19T09:15:28Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- small-ontology
---
# chiro_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e512_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: chiro.owl
- **Domain**: general
- **Ontology Concepts**: 26
- **Concept Alignment**: 26/26 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 26
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.2 MB
- **Model Size**: 87.8 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Additive combination of text and ontological embeddings (matching the additive fusion method above)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 26 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('chiro_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e512_early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/ceph-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early | ellisdoro | 2025-09-19T09:14:47Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-additive", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-09-19T09:14:44Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- small-ontology
---
# ceph_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: ceph.owl
- **Domain**: general
- **Ontology Concepts**: 330
- **Concept Alignment**: 330/330 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 330
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.6 MB
- **Model Size**: 90.2 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Additive combination of text and ontological embeddings (matching the additive fusion method above)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 330 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('ceph_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/ceph-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e128_early-on2vec-koji-early | ellisdoro | 2025-09-19T09:13:55Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-cross_attention", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-09-19T09:13:52Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-cross_attention
- gnn-gcn
- small-ontology
---
# ceph_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e128_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: ceph.owl
- **Domain**: general
- **Ontology Concepts**: 330
- **Concept Alignment**: 330/330 (100.0%)
- **Fusion Method**: cross_attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 330
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.6 MB
- **Model Size**: 93.6 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines text and ontological embeddings using the cross_attention fusion method (a sketch follows the embedding flow below)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 330 concepts → GNN → 64 output
- Fusion: cross_attention → Final embedding
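The flow above ends in a cross-attention step rather than plain concatenation. A minimal sketch of what such a fusion layer can look like, assuming `torch.nn.MultiheadAttention` with the text embedding as query and the ontological embedding as key/value (the real on2vec layer may differ):
```python
# Minimal cross-attention fusion sketch; dimensions follow this card,
# the head count and query projection are illustrative assumptions.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, text_dim=384, out_dim=64, n_heads=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, out_dim)  # 384 -> 64 query space
        self.attn = nn.MultiheadAttention(out_dim, n_heads, batch_first=True)

    def forward(self, text_emb, struct_emb):
        # text_emb: (batch, 384); struct_emb: (batch, 64) from the GNN
        q = self.text_proj(text_emb).unsqueeze(1)  # text attends to structure
        kv = struct_emb.unsqueeze(1)
        fused, _ = self.attn(q, kv, kv)
        return fused.squeeze(1)                    # (batch, 64) fused embedding

fusion = CrossAttentionFusion()
print(fusion(torch.randn(2, 384), torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```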
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model
model = SentenceTransformer('ceph_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e128_early')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/cdao-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e512_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T09:13:34Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-additive",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T09:13:32Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- small-ontology
---
# cdao_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e512_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cdao.owl
- **Domain**: general
- **Ontology Concepts**: 131
- **Concept Alignment**: 131/131 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 131
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.1 MB
- **Model Size**: 88.6 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines text and ontological embeddings using the additive fusion method (a sketch follows the embedding flow below)
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 131 concepts → GNN → 64 output
- Fusion: additive → Final embedding
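Additive fusion sums the two representations once they share a dimension. A minimal sketch under the dimensions listed above (the projection details are assumptions; the real layer may add normalization or gating):
```python
# Minimal additive fusion sketch; the 384 -> 512 -> 64 text path mirrors
# the embedding flow above, everything else is an illustrative assumption.
import torch
import torch.nn as nn

class AdditiveFusion(nn.Module):
    def __init__(self, text_dim=384, hidden=512, out_dim=64):
        super().__init__()
        self.text_mlp = nn.Sequential(
            nn.Linear(text_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, text_emb, struct_emb):
        # struct_emb: (batch, 64) GNN output; fusion is element-wise addition
        return self.text_mlp(text_emb) + struct_emb

fusion = AdditiveFusion()
print(fusion(torch.randn(2, 384), torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```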
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model
model = SentenceTransformer('cdao_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e512_early')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/bfo-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T09:12:45Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-additive",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T09:12:43Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- small-ontology
---
# bfo_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: bfo.owl
- **Domain**: general
- **Ontology Concepts**: 35
- **Concept Alignment**: 35/35 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 35
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.2 MB
- **Model Size**: 87.8 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines text and ontological embeddings using the additive fusion method
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 35 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model
model = SentenceTransformer('bfo_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/bfo-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T09:12:19Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-cross_attention",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T09:12:16Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-cross_attention
- gnn-gcn
- small-ontology
---
# bfo_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: bfo.owl
- **Domain**: general
- **Ontology Concepts**: 35
- **Concept Alignment**: 35/35 (100.0%)
- **Fusion Method**: cross_attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 35
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.2 MB
- **Model Size**: 91.2 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines text and ontological embeddings using the cross_attention fusion method
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 35 concepts → GNN → 64 output
- Fusion: cross_attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model
model = SentenceTransformer('bfo_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/bcgo-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e128_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T09:11:21Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-additive",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T09:11:16Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- medium-ontology
---
# bcgo_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e128_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: bcgo.owl
- **Domain**: general
- **Ontology Concepts**: 2,270
- **Concept Alignment**: 2,270/2,270 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 2270
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.1 MB
- **Model Size**: 105.5 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines text and ontological embeddings using the additive fusion method
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 2270 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model
model = SentenceTransformer('bcgo_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e128_early')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/aism-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e512_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T09:02:52Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-cross_attention",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T09:02:42Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-cross_attention
- gnn-gcn
- medium-ontology
---
# aism_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e512_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: aism.owl
- **Domain**: general
- **Ontology Concepts**: 8,540
- **Concept Alignment**: 8,540/8,540 (100.0%)
- **Fusion Method**: cross_attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 8540
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 28.8 MB
- **Model Size**: 158.5 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines text and ontological embeddings using the cross_attention fusion method
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 8540 concepts → GNN → 64 output
- Fusion: cross_attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model
model = SentenceTransformer('aism_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e512_early')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/agro-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T09:01:10Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-additive",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T09:01:05Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- medium-ontology
---
# agro_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: agro.owl
- **Domain**: general
- **Ontology Concepts**: 4,162
- **Concept Alignment**: 4,162/4,162 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 4162
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 7.2 MB
- **Model Size**: 120.6 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines text and ontological embeddings using the additive fusion method
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 4162 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model
model = SentenceTransformer('agro_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/agro-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T09:00:08Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-cross_attention",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T09:00:02Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-cross_attention
- gnn-gcn
- medium-ontology
---
# agro_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: agro.owl
- **Domain**: general
- **Ontology Concepts**: 4,162
- **Concept Alignment**: 4,162/4,162 (100.0%)
- **Fusion Method**: cross_attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 4162
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 7.2 MB
- **Model Size**: 123.8 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines text and ontological embeddings using the cross_attention fusion method
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 4162 concepts → GNN → 64 output
- Fusion: cross_attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model
model = SentenceTransformer('agro_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
ellisdoro/afpo-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early
|
ellisdoro
| 2025-09-19T08:59:00Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-additive",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T08:58:58Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-additive
- gnn-gcn
- small-ontology
---
# afpo_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: afpo.owl
- **Domain**: general
- **Ontology Concepts**: 473
- **Concept Alignment**: 473/473 (100.0%)
- **Fusion Method**: additive
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 473
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 512
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 1.3 MB
- **Model Size**: 91.3 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Combines text and ontological embeddings using the additive fusion method
**Embedding Flow:**
- Text: 384 dimensions → 512 hidden → 64 output
- Structure: 473 concepts → GNN → 64 output
- Fusion: additive → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model
model = SentenceTransformer('afpo_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute cosine similarity between the two embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758272133
|
schooncestiaa
| 2025-09-19T08:56:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T08:56:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FreedomIntelligence/EchoX-8B
|
FreedomIntelligence
| 2025-09-19T08:36:41Z | 74 | 8 | null |
[
"safetensors",
"EchoX",
"audio-text-to-audio-text",
"speech-understanding",
"audio",
"chat",
"en",
"dataset:custom",
"arxiv:2509.09174",
"license:apache-2.0",
"region:us"
] | null | 2025-09-04T11:01:11Z |
---
language:
- en
tags:
- audio-text-to-audio-text
- speech-understanding
- audio
- chat
license: apache-2.0
datasets:
- custom
metrics:
- wer
- bleu
- AIR-Bench
---
<div align="center">
<h1>
EchoX: Towards Mitigating Acoustic-Semantic Gap via Echo Training for Speech-to-Speech LLMs
</h1>
</div>
<p align="center">
<font size="3">
<a href="https://github.com/FreedomIntelligence/EchoX">🐈⬛ Github</a> | 
<a href="https://arxiv.org/abs/2509.09174">📃 Paper</a> | 
<a href="https://huggingface.co/spaces/FreedomIntelligence/EchoX">🚀 Space</a> | 
<a href="https://huggingface.co/datasets/FreedomIntelligence/EchoX-Dialougues">📊 EchoX-Dialougues</a> | 
<a href="https://huggingface.co/datasets/KurtDu/EchoX-Dialogues-Plus">📊 EchoX-Dialogues-Plus</a>
</font>
</p>
## Model Description
EchoX is a Speech-to-Speech large language model that addresses the acoustic-semantic gap. By introducing **Echo Training**, EchoX integrates semantic and acoustic learning, mitigating the degradation of reasoning ability observed in existing speech-based LLMs. It is trained on only 10k hours of data while delivering state-of-the-art results in knowledge-based question answering and speech interaction tasks.
### Key Features
<div>
<ul>
<font size="3"><li>Mitigates Acoustic-Semantic Gap in Speech-to-Speech LLMs</li></font>
<font size="3"><li>Introduces Echo Training with a Novel Three-Stage Pipeline (S2T, T2C, Echo)</li></font>
<font size="3"><li>Trained on Only 10k Hours of Curated Data, Ensuring Efficiency</li></font>
<font size="3"><li>Achieves State-of-the-Art Performance in Knowledge-Based QA Benchmarks</li></font>
<font size="3"><li>Preserves Reasoning and Knowledge Abilities for Interactive Speech Tasks</li></font>
</ul>
</div>
## Usage
Load the EchoX model and run inference with your audio files as shown in the <a href="https://github.com/FreedomIntelligence/EchoX">GitHub repository</a>.
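For orientation only, a checkpoint with a custom architecture like this is typically loadable through `transformers` with remote code enabled. Whether this repository registers an auto class is an assumption, and the actual processor and generation entry points are defined in the GitHub repository, so treat this as a hedged starting point rather than working inference code:
```python
# Hedged loading sketch only; real inference helpers live in the EchoX repo.
# trust_remote_code pulls the custom "EchoX" model class from the checkpoint.
from transformers import AutoModel

model = AutoModel.from_pretrained("FreedomIntelligence/EchoX-8B", trust_remote_code=True)
```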
# <span>📖 Citation</span>
```
@misc{zhang2025echoxmitigatingacousticsemanticgap,
title={EchoX: Towards Mitigating Acoustic-Semantic Gap via Echo Training for Speech-to-Speech LLMs},
author={Yuhao Zhang and Yuhao Du and Zhanchen Dai and Xiangnan Ma and Kaiqi Kou and Benyou Wang and Haizhou Li},
year={2025},
eprint={2509.09174},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.09174},
}
```
|
Zacktree/code-bench-CodeGemma-7B-cg-nv9n
|
Zacktree
| 2025-09-19T08:35:19Z | 41 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/codegemma-7b",
"base_model:adapter:google/codegemma-7b",
"license:gemma",
"region:us"
] | null | 2025-09-18T09:16:03Z |
---
library_name: peft
license: gemma
base_model: google/codegemma-7b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: code-bench-CodeGemma-7B-cg-nv9n
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-bench-CodeGemma-7B-cg-nv9n
This model is a fine-tuned version of [google/codegemma-7b](https://huggingface.co/google/codegemma-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged trainer sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
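A hedged reconstruction of what this setup may have looked like with TRL and PEFT, using the hyperparameters listed above. The LoRA settings and the training data are placeholders (the card does not state them), and the exact `SFTTrainer` signature varies across TRL versions:
```python
# Hedged reconstruction from the hyperparameters above; the LoRA settings and
# the dataset are placeholders, since the card does not specify them.
from datasets import Dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = Dataset.from_dict({"text": ["def add(a, b):\n    return a + b"]})  # placeholder data

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed values

args = TrainingArguments(
    output_dir="code-bench-CodeGemma-7B-cg-nv9n",
    learning_rate=5e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=3,
    gradient_accumulation_steps=8,  # total train batch size 8
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=4,
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)

trainer = SFTTrainer(
    model="google/codegemma-7b",
    args=args,
    train_dataset=train_dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # argument name depends on the TRL version
)
trainer.train()
```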
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.623 | 0.0530 | 50 | 0.5961 |
| 0.473 | 0.1061 | 100 | 0.4669 |
| 0.4018 | 0.1591 | 150 | 0.3573 |
| 0.3249 | 0.2121 | 200 | 0.2696 |
| 0.2703 | 0.2652 | 250 | 0.2265 |
| 0.2066 | 0.3182 | 300 | 0.1859 |
| 0.1725 | 0.3713 | 350 | 0.1526 |
| 0.1588 | 0.4243 | 400 | 0.1341 |
| 0.1467 | 0.4773 | 450 | 0.1278 |
| 0.1349 | 0.5304 | 500 | 0.1229 |
| 0.1541 | 0.5834 | 550 | 0.1170 |
| 0.1262 | 0.6364 | 600 | 0.1137 |
| 0.1273 | 0.6895 | 650 | 0.1118 |
| 0.1322 | 0.7425 | 700 | 0.1091 |
| 0.1244 | 0.7955 | 750 | 0.1069 |
| 0.1134 | 0.8486 | 800 | 0.1043 |
| 0.1206 | 0.9016 | 850 | 0.1030 |
| 0.117 | 0.9547 | 900 | 0.1016 |
| 0.101 | 1.0077 | 950 | 0.1003 |
| 0.1095 | 1.0607 | 1000 | 0.0999 |
| 0.0989 | 1.1138 | 1050 | 0.0991 |
| 0.1054 | 1.1668 | 1100 | 0.0972 |
| 0.1073 | 1.2198 | 1150 | 0.0964 |
| 0.1057 | 1.2729 | 1200 | 0.0951 |
| 0.111 | 1.3259 | 1250 | 0.0953 |
| 0.0946 | 1.3789 | 1300 | 0.0935 |
| 0.0907 | 1.4320 | 1350 | 0.0926 |
| 0.0989 | 1.4850 | 1400 | 0.0919 |
| 0.1037 | 1.5381 | 1450 | 0.0905 |
| 0.0955 | 1.5911 | 1500 | 0.0899 |
| 0.0893 | 1.6441 | 1550 | 0.0887 |
| 0.1015 | 1.6972 | 1600 | 0.0881 |
| 0.0952 | 1.7502 | 1650 | 0.0874 |
| 0.0918 | 1.8032 | 1700 | 0.0868 |
| 0.0926 | 1.8563 | 1750 | 0.0860 |
| 0.0886 | 1.9093 | 1800 | 0.0852 |
| 0.0989 | 1.9623 | 1850 | 0.0841 |
| 0.0863 | 2.0154 | 1900 | 0.0839 |
| 0.0821 | 2.0684 | 1950 | 0.0836 |
| 0.0964 | 2.1215 | 2000 | 0.0830 |
| 0.0755 | 2.1745 | 2050 | 0.0824 |
| 0.0777 | 2.2275 | 2100 | 0.0817 |
| 0.0761 | 2.2806 | 2150 | 0.0807 |
| 0.0777 | 2.3336 | 2200 | 0.0802 |
| 0.0857 | 2.3866 | 2250 | 0.0795 |
| 0.0883 | 2.4397 | 2300 | 0.0793 |
| 0.0784 | 2.4927 | 2350 | 0.0785 |
| 0.0774 | 2.5457 | 2400 | 0.0779 |
| 0.0776 | 2.5988 | 2450 | 0.0772 |
| 0.0788 | 2.6518 | 2500 | 0.0770 |
| 0.0853 | 2.7064 | 2550 | 0.0768 |
| 0.0836 | 2.7595 | 2600 | 0.0764 |
| 0.0822 | 2.8125 | 2650 | 0.0758 |
| 0.0862 | 2.8656 | 2700 | 0.0755 |
| 0.0753 | 2.9186 | 2750 | 0.0750 |
| 0.0798 | 2.9716 | 2800 | 0.0744 |
| 0.0762 | 3.0247 | 2850 | 0.0741 |
| 0.0884 | 3.0777 | 2900 | 0.0736 |
| 0.0753 | 3.1307 | 2950 | 0.0731 |
| 0.0774 | 3.1838 | 3000 | 0.0727 |
| 0.0753 | 3.2368 | 3050 | 0.0725 |
| 0.0853 | 3.2898 | 3100 | 0.0723 |
| 0.0723 | 3.3429 | 3150 | 0.0718 |
| 0.0762 | 3.3959 | 3200 | 0.0713 |
| 0.0737 | 3.4490 | 3250 | 0.0712 |
| 0.0751 | 3.5020 | 3300 | 0.0705 |
| 0.0737 | 3.5550 | 3350 | 0.0700 |
| 0.069 | 3.6081 | 3400 | 0.0701 |
| 0.0696 | 3.6611 | 3450 | 0.0697 |
| 0.0725 | 3.7141 | 3500 | 0.0692 |
| 0.074 | 3.7672 | 3550 | 0.0686 |
| 0.0655 | 3.8202 | 3600 | 0.0684 |
| 0.0671 | 3.8732 | 3650 | 0.0679 |
| 0.0642 | 3.9263 | 3700 | 0.0676 |
| 0.07 | 3.9793 | 3750 | 0.0676 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.5.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
NandhasuriyaS/qad-gpt-oss-20b-298-deq
|
NandhasuriyaS
| 2025-09-19T08:33:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T08:31:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chriswang2025/wan2-2-ev-3
|
chriswang2025
| 2025-09-19T08:25:46Z | 0 | 0 | null |
[
"feature",
"training",
"new",
"wavespeed",
"license:other",
"region:us"
] | null | 2025-09-19T08:25:03Z |
---
tags:
- feature
- training
- new
- wavespeed
base_model: undefined
instance_prompt: trigger_word
license: other
---
# wavespeed-ai/wan-2.2-i2v-lora-trainer
<Gallery />
## Model description
## Trigger words
You should use `trigger_word` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/chriswang2025/wan2-2-ev-3/tree/main) them in the Files & versions tab.
## Training at wavespeed.ai
Training was done using [wavespeed.ai/models/wavespeed-ai/wan-2.2-i2v-lora-trainer](https://wavespeed.ai/models/wavespeed-ai/wan-2.2-i2v-lora-trainer).
|
tomal66/qwen3-0.6b-sarcasm-fpt-sft
|
tomal66
| 2025-09-19T08:02:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T08:02:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bappy2001/banglat5-bangladata
|
bappy2001
| 2025-09-19T08:00:44Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:csebuetnlp/banglat5",
"base_model:finetune:csebuetnlp/banglat5",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T07:59:38Z |
---
library_name: transformers
base_model: csebuetnlp/banglat5
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: banglat5-bangladata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# banglat5-bangladata
This model is a fine-tuned version of [csebuetnlp/banglat5](https://huggingface.co/csebuetnlp/banglat5) on an unknown dataset.
It achieves the following results on the evaluation set (a ROUGE computation sketch follows the list):
- Loss: 4.1027
- Rouge1: 1.5873
- Rouge2: 0.0
- Rougel: 1.5873
- Rougelsum: 1.5873
- Gen Len: 12.7143
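The ROUGE values above appear to be on a 0–100 scale. A minimal sketch of computing such scores with the `evaluate` library, which is one common choice (the exact evaluation code behind this card is not shown and may differ):
```python
# Minimal ROUGE sketch with the `evaluate` library; inputs are placeholders,
# and this is not necessarily the tooling used to produce the numbers above.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["একটি উদাহরণ সারাংশ"]  # placeholder model outputs
references = ["একটি উদাহরণ সারাংশ"]   # placeholder gold summaries
print(rouge.compute(predictions=predictions, references=references))
```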
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 8.6168 | 1.0 | 754 | 4.3285 | 1.5873 | 0.0 | 1.5873 | 1.5873 | 14.1786 |
| 4.8514 | 2.0 | 1508 | 4.1374 | 1.5873 | 0.0 | 1.5873 | 1.5873 | 12.4881 |
| 4.7491 | 3.0 | 2262 | 4.1027 | 1.5873 | 0.0 | 1.5873 | 1.5873 | 12.7143 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.1.1
- Tokenizers 0.22.0
|
MrDave/gemma270m-fiorentino-lora
|
MrDave
| 2025-09-19T07:42:48Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T11:07:23Z |
---
base_model: unsloth/gemma-3-270m-it
library_name: transformers
model_name: gemma270m-fiorentino-lora
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma270m-fiorentino-lora
This model is a fine-tuned version of [unsloth/gemma-3-270m-it](https://huggingface.co/unsloth/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MrDave/gemma270m-fiorentino-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
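For reference, a minimal TRL SFT sketch (the dataset, model string, and arguments below are placeholders, not the author's actual training configuration):

```python
# Minimal TRL SFT sketch; dataset and arguments are placeholders,
# not the configuration actually used for this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="unsloth/gemma-3-270m-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma270m-fiorentino-lora"),
)
trainer.train()
```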
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
wangjian21/Nudity
|
wangjian21
| 2025-09-19T07:28:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-09-19T07:06:04Z |
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: nudity
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - wangjian21/Nudity
These are LoRA adaptation weights for stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on nudity using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
cuongdk253/gemma3-4b-bnb-4bit
|
cuongdk253
| 2025-09-19T06:49:16Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-09-19T04:33:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Doomerang/llama2-finetune-kl-100edits
|
Doomerang
| 2025-09-19T06:43:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T06:41:15Z |
---
library_name: transformers
license: llama2
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: llama2-finetune-kl-100edits
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-finetune-kl-100edits
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
|
AniruddhaSangule/genz_GPT
|
AniruddhaSangule
| 2025-09-19T06:42:24Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T06:31:30Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: genz_GPT
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for genz_GPT
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AniruddhaSangule/genz_GPT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BKM1804/35d638ba-a7d3-49da-ba50-ff8df29418f0
|
BKM1804
| 2025-09-19T06:30:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T05:43:07Z |
---
library_name: transformers
tags:
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Wwayu/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER-mlx-8Bit
|
Wwayu
| 2025-09-19T06:29:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"programming",
"code generation",
"code",
"codeqwen",
"moe",
"coding",
"coder",
"qwen2",
"chat",
"qwen",
"qwen-coder",
"Qwen3-30B-A3B-Thinking-2507",
"Qwen3-30B-A3B",
"mixture of experts",
"128 experts",
"10 active experts",
"256k context",
"qwen3",
"finetune",
"brainstorm 40x",
"brainstorm",
"thinking",
"reasoning",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"fr",
"zh",
"de",
"base_model:DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER",
"base_model:quantized:DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-09-19T06:23:43Z |
---
license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- Qwen3-30B-A3B-Thinking-2507
- Qwen3-30B-A3B
- mixture of experts
- 128 experts
- 10 active experts
- 256k context
- qwen3
- finetune
- brainstorm 40x
- brainstorm
- thinking
- reasoning
- qwen3_moe
- mlx
- mlx-my-repo
base_model: DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER
pipeline_tag: text-generation
---
# Wwayu/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER-mlx-8Bit
The Model [Wwayu/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER-mlx-8Bit](https://huggingface.co/Wwayu/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER-mlx-8Bit) was converted to MLX format from [DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER](https://huggingface.co/DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER) using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Wwayu/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
sumanthmandavalli/SmallMedLM
|
sumanthmandavalli
| 2025-09-19T06:08:19Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"medical",
"disease",
"symptoms",
"treatment",
"dataset:QuyenAnhDE/Diseases_Symptoms",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T05:27:08Z |
---
tags:
- medical
- disease
- symptoms
- treatment
- gpt2
license: apache-2.0
datasets:
- QuyenAnhDE/Diseases_Symptoms
---
# SmallMedLM
**SmallMedLM** is a fine-tuned [`distilgpt2`](https://huggingface.co/distilgpt2) model trained on medical text data about diseases, symptoms, and treatments.
---
## Model Description
This model is designed for **generating medical information** given a disease or symptom prompt.
It can output possible **symptoms** for a disease or suggest **treatment directions** based on symptoms.
⚠️ **Disclaimer**: This model is for research/educational purposes only. It is **not a substitute for professional medical advice**. Always consult a qualified healthcare professional.
---
## Training Data
- Dataset: [Diseases_Symptoms](https://huggingface.co/datasets/QuyenAnhDE/Diseases_Symptoms)
- Domain: Disease → Symptoms → Treatment mapping
- Base model: `distilgpt2`
---
## Usage
### Inference Example
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
model_name = "sumanthmandavalli/SmallMedLM"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
def generate_medical_info(disease_name, max_length=100):
prompt = f"Disease: {disease_name} | Symptoms: "
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(
inputs,
max_length=max_length,
num_return_sequences=1,
no_repeat_ngram_size=2,
top_k=50,
top_p=0.95,
temperature=0.7,
do_sample=True
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generate_medical_info("Diabetes"))
```
|
RedHatAI/DeepSeek-R1-0528-quantized.w4a16
|
RedHatAI
| 2025-09-19T06:05:48Z | 1,954 | 9 | null |
[
"safetensors",
"deepseek_v3",
"deepseek",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"INT4",
"GPTQ",
"conversational",
"compressed-tensors",
"text-generation",
"custom_code",
"en",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528",
"license:mit",
"region:us"
] |
text-generation
| 2025-05-30T16:14:36Z |
---
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-0528
pipeline_tag: text-generation
tags:
- deepseek_v3
- deepseek
- neuralmagic
- redhat
- llmcompressor
- quantized
- INT4
- GPTQ
- conversational
- compressed-tensors
license: mit
license_name: mit
name: RedHatAI/DeepSeek-R1-0528-quantized.w4a16
description: This model was obtained by quantizing weights of DeepSeek-R1-0528 to INT4 data type.
readme: https://huggingface.co/RedHatAI/DeepSeek-R1-0528-quantized.w4a16/blob/main/README.md
tasks:
- text-to-text
provider: DeepSeek
license_link: https://choosealicense.com/licenses/mit/
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
DeepSeek-R1-0528-quantized.w4a16
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
## Model Overview
- **Model Architecture:** DeepseekV3ForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** None
- **Weight quantization:** INT4
- **Release Date:** 05/30/2025
- **Version:** 1.0
- **Validated on:** RHOAI 2.24, RHAIIS 3.2.1
- **Model Developers:** Red Hat (Neural Magic)
### Model Optimizations
This model was obtained by quantizing weights of [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) to INT4 data type.
This optimization reduces the number of bits used to represent weights from 8 to 4, reducing GPU memory requirements (by approximately 50%).
Weight quantization also reduces disk size requirements by approximately 50%.
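As a back-of-the-envelope illustration of the weight-memory savings (a sketch that assumes the published ~671B total parameter count and ignores activations, KV cache, and any tensors kept at higher precision):

```python
# Rough weight-memory estimate for 8-bit vs 4-bit storage.
# Assumes ~671e9 total parameters and ignores activations,
# KV cache, and any tensors kept at higher precision.
params = 671e9

int8_gb = params * 1.0 / 1e9  # 1 byte per weight
int4_gb = params * 0.5 / 1e9  # 0.5 bytes per weight

print(f"INT8 weights: ~{int8_gb:.0f} GB")
print(f"INT4 weights: ~{int4_gb:.0f} GB (~50% reduction)")
```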
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/DeepSeek-R1-0528-quantized.w4a16"
number_gpus = 8
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Give me a short introduction to large language model."
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompt, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
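For a quick smoke test against such a server, here is a minimal sketch using the `openai` Python client (the serve command, port, and request parameters below are assumptions, not part of this card):

```python
# Sketch: query a locally served model through vLLM's OpenAI-compatible API.
# Assumes the server was started with something like:
#   vllm serve RedHatAI/DeepSeek-R1-0528-quantized.w4a16 --tensor-parallel-size 8
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="RedHatAI/DeepSeek-R1-0528-quantized.w4a16",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```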
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/DeepSeek-R1-0528-quantized.w4a16
```
</details>
<details>
<summary>Deploy on <strong>Red Hat Openshift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
annotations:
openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
labels:
opendatahub.io/dashboard: 'true'
spec:
annotations:
prometheus.io/port: '8080'
prometheus.io/path: '/metrics'
multiModel: false
supportedModelFormats:
- autoSelect: true
name: vLLM
containers:
- name: kserve-container
image: quay.io/modh/vllm:rhoai-2.24-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.24-rocm
command:
- python
- -m
- vllm.entrypoints.openai.api_server
args:
- "--port=8080"
- "--model=/mnt/models"
- "--served-model-name={{.Name}}"
env:
- name: HF_HOME
value: /tmp/hf_home
ports:
- containerPort: 8080
protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
annotations:
openshift.io/display-name: DeepSeek-R1-0528-quantized.w4a16 # OPTIONAL CHANGE
serving.kserve.io/deploymentMode: RawDeployment
name: DeepSeek-R1-0528-quantized.w4a16 # specify model name. This value will be used to invoke the model in the payload
labels:
opendatahub.io/dashboard: 'true'
spec:
predictor:
maxReplicas: 1
minReplicas: 1
model:
modelFormat:
name: vLLM
name: ''
resources:
limits:
cpu: '2' # this is model specific
memory: 8Gi # this is model specific
nvidia.com/gpu: '1' # this is accelerator specific
requests: # same comment for this block
cpu: '1'
memory: 4Gi
nvidia.com/gpu: '1'
runtime: vllm-cuda-runtime # must match the ServingRuntime name above
storageUri: oci://registry.redhat.io/rhelai1/modelcar-deepseek-r1-0528-quantized-w4a16:1.5
tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<domain>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "DeepSeek-R1-0528-quantized.w4a16",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat Openshift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
## Creation
We created this model using **MoE-Quant**, a library developed jointly with **ISTA** and tailored for the quantization of very large Mixture-of-Experts (MoE) models.
For more details, please refer to the [MoE-Quant repository](https://github.com/IST-DASLab/MoE-Quant).
## Evaluation
The model was evaluated on popular reasoning tasks (AIME 2024, MATH-500, GPQA-Diamond) via [LightEval](https://github.com/huggingface/open-r1).
For reasoning evaluations, we estimate pass@1 based on 10 runs with different seeds, `temperature=0.6`, `top_p=0.95` and `max_new_tokens=65536`.
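A minimal sketch of this estimator (the per-run accuracies below are placeholders, not the reported numbers):

```python
# Sketch: estimate pass@1 as the mean accuracy over 10 independent
# sampled runs, following the protocol described above.
import statistics

run_accuracies = [0.88, 0.87, 0.89, 0.86, 0.88,
                  0.87, 0.88, 0.86, 0.89, 0.87]  # placeholder values

pass_at_1 = statistics.mean(run_accuracies)
stderr = statistics.stdev(run_accuracies) / len(run_accuracies) ** 0.5
print(f"pass@1 ≈ {pass_at_1:.4f} ± {stderr:.4f}")
```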
### Accuracy
| | Recovery (%) | deepseek/DeepSeek-R1-0528 | RedHatAI/DeepSeek-R1-0528-quantized.w4a16<br>(this model) |
| --------------------------- | :----------: | :------------------: | :--------------------------------------------------: |
| AIME 2024<br>pass@1 | 98.50 | 88.66 | 87.33 |
| MATH-500<br>pass@1 | 99.88 | 97.52 | 97.40 |
| GPQA Diamond<br>pass@1 | 101.21 | 79.65 | 80.61 |
| **Reasoning<br>Average Score** | **99.82** | **88.61** | **88.45** |
|
mudigiriakshaya/my_tts_model
|
mudigiriakshaya
| 2025-09-19T05:56:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/orpheus-3b-0.1-ft",
"lora",
"transformers",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/orpheus-3b-0.1-ft",
"region:us"
] |
text-generation
| 2025-09-18T07:31:42Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/orpheus-3b-0.1-ft
- lora
- transformers
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
Fate-Zero/Archer2.0-Code-1.5B-Preview
|
Fate-Zero
| 2025-09-19T05:55:18Z | 7 | 3 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-04T07:33:31Z |
---
license: apache-2.0
---
<div align="center">
# ✨ Archer2.0
<div>
🏹️ Reinforcement Learning for Enhanced Reasoning in LLMs 🎯
</div>
</div>
<div>
<br>
<div align="center">
[](https://rogue-canopy-54a.notion.site/Asymmetric-Dual-Clipping-Policy-Optimization-Escaping-Local-Optima-Unlocking-the-Full-Potential--2650e4c8c16a8034a5d3dfec358c9021)
[](https://github.com/wizard-III/Archer2.0)
[](https://huggingface.co/Fate-Zero/Archer2.0-Code-1.5B-Preview)
[](https://huggingface.co/datasets/Fate-Zero/Archer2.0-Code-1.5B)
[](https://zhuanlan.zhihu.com/p/1950989602023244983)
</div>
<table>
<thead>
<tr>
<th rowspan="2">Method</th>
<th colspan="2">LCB v5 (2024.08.01–2025.02.01)</th>
<th colspan="2">LCB v6 (2025.02.01–2025.05.01)</th>
<th rowspan="2">Avg.</th>
</tr>
<tr>
<th>avg@8</th>
<th>pass@8</th>
<th>avg@16</th>
<th>pass@16</th>
</tr>
</thead>
<tbody>
<tr>
<td>DeepSeek-R1-1.5B</td>
<td>16.7</td>
<td>29.0</td>
<td>17.2</td>
<td>34.4</td>
<td>17.0</td>
</tr>
<tr>
<td>DAPO</td>
<td>26.0</td>
<td>40.5</td>
<td>27.6</td>
<td>43.5</td>
<td>26.8</td>
</tr>
<tr>
<td>DeepCoder-1.5B</td>
<td>23.3</td>
<td>39.1</td>
<td>22.6</td>
<td>42.0</td>
<td>23.0</td>
</tr>
<tr>
<td>Nemotron-1.5B</td>
<td>26.1</td>
<td>35.5</td>
<td>29.5</td>
<td>42.8</td>
<td>27.8</td>
</tr>
<tr>
<td><strong>Archer-Code-1.5B</strong></td>
<td><strong>29.4</strong></td>
<td><strong>43.7</strong></td>
<td><strong>30.2</strong></td>
<td><strong>45.8</strong></td>
<td><strong>29.8</strong></td>
</tr>
<tr>
<td><strong>Archer2.0-Code-1.5B-Preview</strong></td>
<td><strong>31.5</strong></td>
<td><strong>47.0</strong></td>
<td><strong>30.5</strong></td>
<td><strong>46.0</strong></td>
<td><strong>31.0</strong></td>
</tr>
</tbody>
</table>
|
InternRobotics/InternVLA-M1
|
InternRobotics
| 2025-09-19T05:12:40Z | 0 | 18 | null |
[
"safetensors",
"qwen2_5_vl",
"robotics",
"vision-language-action-model",
"vision-language-model",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
robotics
| 2025-09-16T12:42:29Z |
---
license: cc-by-nc-sa-4.0
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
tags:
- robotics
- vision-language-action-model
- vision-language-model
---
# Model Card for InternVLA-M1
## Description:
**InternVLA-M1** is an open-source, end-to-end **vision–language–action (VLA) framework** for building and researching generalist robot policies. The checkpoints in this repository were pretrained on the system2 dataset.
- 🌐 Homepage: [InternVLA-M1 Project Page](https://internrobotics.github.io/internvla-m1.github.io/)
- 💻 Codebase: [InternVLA-M1 GitHub Repo](https://github.com/InternRobotics/InternVLA-M1)

## Citation
```
@misc{internvla2024,
title = {InternVLA-M1: Latent Spatial Grounding for Instruction-Following Robotic Manipulation},
author = {InternVLA-M1 Contributors},
year = {2025},
booktitle={arXiv},
}
```
|
fengpeisheng1/omega-10b-test-IQ4_NL-GGUF
|
fengpeisheng1
| 2025-09-19T04:41:00Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:openerotica/long-roleplay-v0.1",
"base_model:SuperbEmphasis/omega-10b-test",
"base_model:quantized:SuperbEmphasis/omega-10b-test",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-19T04:06:05Z |
---
datasets:
- openerotica/long-roleplay-v0.1
base_model: SuperbEmphasis/omega-10b-test
tags:
- llama-cpp
- gguf-my-repo
---
# fengpeisheng1/omega-10b-test-IQ4_NL-GGUF
This model was converted to GGUF format from [`SuperbEmphasis/omega-10b-test`](https://huggingface.co/SuperbEmphasis/omega-10b-test) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SuperbEmphasis/omega-10b-test) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fengpeisheng1/omega-10b-test-IQ4_NL-GGUF --hf-file omega-10b-test-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fengpeisheng1/omega-10b-test-IQ4_NL-GGUF --hf-file omega-10b-test-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fengpeisheng1/omega-10b-test-IQ4_NL-GGUF --hf-file omega-10b-test-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fengpeisheng1/omega-10b-test-IQ4_NL-GGUF --hf-file omega-10b-test-iq4_nl-imat.gguf -c 2048
```
|
haihp02/7dff8d8a-acf2-45b4-9709-eeeccd99e988
|
haihp02
| 2025-09-19T04:35:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T04:34:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
felixZzz/4b_pseudoteacher_response_acc_rolloutY_mix-0918
|
felixZzz
| 2025-09-19T03:55:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T03:51:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_4_okvqa_37_0.001_2560_10
|
winnieyangwannan
| 2025-09-19T03:48:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-19T03:46:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amphion/anyaccomp
|
amphion
| 2025-09-19T03:27:48Z | 0 | 0 |
torch
|
[
"torch",
"safetensors",
"audio",
"music-generation",
"accompaniment-generation",
"unconditional-audio-generation",
"pytorch",
"en",
"arxiv:2509.14052",
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2025-09-12T03:31:47Z |
---
license: cc-by-nc-nd-4.0
language:
- en
library_name: torch
tags:
- audio
- music-generation
- accompaniment-generation
- unconditional-audio-generation
- pytorch
---
## AnyAccomp: Generalizable Accompaniment Generation via Quantized Melodic Bottleneck
This is the official Hugging Face model repository for **AnyAccomp**, an accompaniment generation framework from the paper **AnyAccomp: Generalizable Accompaniment Generation via Quantized Melodic Bottleneck**.
AnyAccomp addresses two critical challenges in accompaniment generation: **generalization** to in-the-wild singing voices and **versatility** in handling solo instrumental inputs.
The core of our framework is a **quantized melodic bottleneck**, which extracts robust melodic features. A subsequent flow matching model then generates a matching accompaniment based on these features.
For more details, please visit our [GitHub Repository](https://github.com/AmphionTeam/AnyAccomp).
<img src="https://anyaccomp.github.io/data/framework.jpg" alt="framework" width="500">
## Model Checkpoints
This repository contains the three pretrained components of the AnyAccomp framework:
| Model Name | Directory | Description |
| ----------------- | ---------------------------- | ------------------------------------------------- |
| **VQ** | `./pretrained/vq` | Extracts core melodic features from audio. |
| **Flow Matching** | `./pretrained/flow_matching` | Generates accompaniments from melodic features. |
| **Vocoder** | `./pretrained/vocoder` | Converts generated features into audio waveforms. |
## How to use
To run this model, follow the steps below:
1. Clone the repository.
2. Download the pretrained models.
3. Install the environment.
4. Run the Gradio demo or the inference script.
### 1. Clone the Repository
```bash
git clone https://github.com/AmphionTeam/AnyAccomp.git
# enter the repository directory
cd AnyAccomp
```
### 2. Download the Pretrained Models
We provide a simple Python script to download all the necessary pretrained models from Hugging Face into the correct directory.
Before running the script, make sure you are in the `AnyAccomp` root directory.
Run the following command:
```bash
python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='amphion/anyaccomp', local_dir='./pretrained', repo_type='model')"
```
If you have trouble connecting to Hugging Face, you can try switching to a mirror endpoint before running the command:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
### 3. Install the Environment
Before installing, make sure you are in the `AnyAccomp` root directory; if not, `cd` into it first.
```bash
conda create -n anyaccomp python=3.9
conda activate anyaccomp
conda install -c conda-forge ffmpeg=4.0
pip install -r requirements.txt
```
### 4. Run the Model
Once the setup is complete, you can run the model using either the Gradio demo or the inference script.
#### Run Gradio 🤗 Playground Locally
You can run the following command to interact with the playground:
```bash
python gradio_app.py
```
#### Inference Script
To run inference on multiple audio files, use the folder-based inference script.
```bash
python infer_from_folder.py
```
By default, the script loads input audio from `./example/input` and saves the results to `./example/output`. You can customize these paths in the [inference script](./anyaccomp/infer_from_folder.py).
## Citation
If you use AnyAccomp in your research, please cite our paper:
```bibtex
@article{zhang2025anyaccomp,
title={AnyAccomp: Generalizable Accompaniment Generation via Quantized Melodic Bottleneck},
author={Zhang, Junan and Zhang, Yunjia and Zhang, Xueyao and Wu, Zhizheng},
journal={arXiv preprint arXiv:2509.14052},
year={2025}
}
```
|
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_6739
|
luckeciano
| 2025-09-19T03:25:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T22:47:25Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_6739
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_6739
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_6739", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/8706il4x)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yusenthebot/sign-identification-autogluon
|
yusenthebot
| 2025-09-19T03:02:12Z | 0 | 0 |
autogluon
|
[
"autogluon",
"image-classification",
"automl",
"sign-identification",
"dataset:ecopus/sign_identification",
"model-index",
"region:us"
] |
image-classification
| 2025-09-19T00:59:51Z |
---
library_name: autogluon
tags:
- image-classification
- autogluon
- automl
- sign-identification
datasets:
- ecopus/sign_identification
metrics:
- accuracy
- f1
model-index:
- name: sign-identification-autogluon
results:
- task:
type: image-classification
name: Image Classification
dataset:
type: ecopus/sign_identification
name: Sign Identification
metrics:
- type: accuracy
value: 0.833
name: Test Accuracy
- type: f1
value: 0.829
name: Test F1 Score (Weighted)
---
# Sign Identification with AutoGluon
This model was trained using AutoGluon's AutoML capabilities for sign identification.
Performance is limited by the very small amount of training data (only 30 images).
## Model Description
- **Framework**: AutoGluon MultiModal
- **Task**: Multi-class image classification (N classes)
- **Dataset**: [ecopus/sign_identification](https://huggingface.co/datasets/ecopus/sign_identification)
- **Architecture**: ResNet18 (`timm_image`)
- **Training**: `medium_quality` preset
## Performance
| Metric | Validation | Test |
|--------|------------|------|
| Accuracy | 0.833 | 0.571 |
| F1 Score (Weighted) | 0.829 | 0.571 |
## Usage
### Download and Load Model (Recommended — Native Directory)
```python
from autogluon.multimodal import MultiModalPredictor
from huggingface_hub import hf_hub_download
import zipfile
import os
# Download the zipped predictor directory
zip_path = hf_hub_download(
repo_id="yusenthebot/sign-identification-autogluon",
filename="autogluon_sign_predictor_dir.zip"
)
# Extract
extract_dir = "predictor_dir"
os.makedirs(extract_dir, exist_ok=True)
with zipfile.ZipFile(zip_path, "r") as zf:
zf.extractall(extract_dir)
# Load predictor
predictor = MultiModalPredictor.load(extract_dir)
# Predict (replace `your_dataframe` with a pandas DataFrame that matches training schema)
# predictions = predictor.predict(your_dataframe)
```
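A hypothetical prediction call might look like the following; the `"image"` column name is an assumption based on the common AutoGluon MultiModal image-classification schema, so adjust it to match the training DataFrame:
```python
import pandas as pd

# Paths to local images to classify (hypothetical file names)
test_df = pd.DataFrame({"image": ["sign1.jpg", "sign2.jpg"]})
predictions = predictor.predict(test_df)
print(predictions)
```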
|
changyuwen06/q-FrozenLake-v1-4x4-noSlippery
|
changyuwen06
| 2025-09-19T02:54:41Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-19T02:54:38Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # FrozenLake-v1 is provided by gymnasium

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="changyuwen06/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
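A minimal greedy rollout sketch is shown below; it assumes the saved dictionary stores the Q-table under a `"qtable"` key, as in the Hugging Face Deep RL course template, and uses the gymnasium API:
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```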
|
Nerva1228/daoshitu
|
Nerva1228
| 2025-09-19T02:48:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-19T02:48:17Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: daoshitu
---
# Daoshitu
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `daoshitu` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "daoshitu",
"lora_weights": "https://huggingface.co/Nerva1228/daoshitu/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/daoshitu', weight_name='lora.safetensors')
image = pipeline('daoshitu').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 5e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/daoshitu/discussions) to add images that show off what you’ve made with this LoRA.
|
pepppper/audio_cls
|
pepppper
| 2025-09-19T02:40:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-09-19T02:40:03Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: audio_cls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_cls
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6549
- Accuracy: 0.0336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
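For reference, a hypothetical `TrainingArguments` configuration matching the listed hyperparameters might look like this (illustrative only; the exact arguments used for this run are not published):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="audio_cls",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size of 16
    optim="adamw_torch_fused",      # AdamW, torch fused implementation
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,                      # native AMP mixed precision
    seed=42,
)
```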
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.639 | 1.0 | 30 | 2.6473 | 0.0504 |
| 2.6399 | 2.0 | 60 | 2.6481 | 0.0672 |
| 2.6514 | 3.0 | 90 | 2.6458 | 0.0588 |
| 2.6384 | 4.0 | 120 | 2.6581 | 0.0336 |
| 2.6289 | 5.0 | 150 | 2.6557 | 0.0336 |
| 2.6308 | 6.0 | 180 | 2.6607 | 0.0252 |
| 2.643 | 7.0 | 210 | 2.6600 | 0.0336 |
| 2.648 | 8.0 | 240 | 2.6641 | 0.0336 |
| 2.6333 | 9.0 | 270 | 2.6596 | 0.0336 |
| 2.6289 | 10.0 | 300 | 2.6549 | 0.0336 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
ScatterRaven/audio_cls_ko
|
ScatterRaven
| 2025-09-19T02:18:49Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:Kkonjeong/wav2vec2-base-korean",
"base_model:finetune:Kkonjeong/wav2vec2-base-korean",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-09-19T02:18:41Z |
---
library_name: transformers
base_model: Kkonjeong/wav2vec2-base-korean
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: audio_cls_ko
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_cls_ko
This model is a fine-tuned version of [Kkonjeong/wav2vec2-base-korean](https://huggingface.co/Kkonjeong/wav2vec2-base-korean) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5750
- Accuracy: 0.8655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.65 | 1.0 | 30 | 2.6718 | 0.0420 |
| 2.3368 | 2.0 | 60 | 2.1884 | 0.4706 |
| 1.8532 | 3.0 | 90 | 1.7631 | 0.4790 |
| 1.3174 | 4.0 | 120 | 1.3977 | 0.6723 |
| 0.941 | 5.0 | 150 | 1.1978 | 0.7731 |
| 0.5739 | 6.0 | 180 | 0.9341 | 0.7815 |
| 0.4066 | 7.0 | 210 | 0.8123 | 0.8151 |
| 0.2582 | 8.0 | 240 | 0.6095 | 0.8992 |
| 0.1904 | 9.0 | 270 | 0.5689 | 0.8824 |
| 0.1638 | 10.0 | 300 | 0.5750 | 0.8655 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
ellisdoro/EDAM-all-MiniLM-L6-v2_attention_gat_h4096_o64_cross_entropy_e512-on2vec-a
|
ellisdoro
| 2025-09-19T02:12:54Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"biomedical",
"biomedical-ontology",
"fusion-attention",
"gnn-gat",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-19T02:12:48Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- biomedical
- biomedical-ontology
- fusion-attention
- gnn-gat
- medium-ontology
---
# EDAM_all-MiniLM-L6-v2_attention_gat_h4096_o64_cross_entropy_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: EDAM.owl
- **Domain**: biomedical
- **Ontology Concepts**: 3,511
- **Concept Alignment**: 3,511/3,511 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GAT
- **Structural Embedding Dimension**: 3511
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 4096
- **Dropout**: 0.0
- **Training Date**: 2025-09-19
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.2 MB
- **Model Size**: 123.9 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 4096 hidden → 64 output
- Structure: 3511 concepts → GNN → 64 output
- Fusion: attention → Final embedding
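As an illustration of what such a fusion layer can look like, here is a minimal attention-weighted fusion sketch in PyTorch; the class name and scoring scheme are illustrative assumptions, not the exact on2vec implementation:
```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Learns to weight a text embedding against a structural (GNN) embedding."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)  # one relevance score per modality

    def forward(self, text_emb: torch.Tensor, struct_emb: torch.Tensor) -> torch.Tensor:
        stacked = torch.stack([text_emb, struct_emb], dim=1)  # (batch, 2, dim)
        weights = torch.softmax(self.scorer(stacked), dim=1)  # (batch, 2, 1)
        return (weights * stacked).sum(dim=1)                 # (batch, dim)

fusion = AttentionFusion(dim=64)
fused = fusion(torch.randn(8, 64), torch.randn(8, 64))
print(fused.shape)  # torch.Size([8, 64])
```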
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model from the Hub
model = SentenceTransformer('ellisdoro/EDAM-all-MiniLM-L6-v2_attention_gat_h4096_o64_cross_entropy_e512-on2vec-a')
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- Biomedical domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
fromthesky/PLDR-LLM-v51-110M-5
|
fromthesky
| 2025-09-19T01:58:39Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pldrllm",
"text-generation",
"large-language-model",
"power-law-decoder-representations",
"power-law-graph-attention",
"pldr-llm",
"kv-cache",
"g-cache",
"kvg-cache",
"pytorch",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2502.13502",
"arxiv:2306.01116",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-02-23T08:16:10Z |
---
language:
- en
tags:
- text-generation
- large-language-model
- power-law-decoder-representations
- power-law-graph-attention
- pldr-llm
- kv-cache
- g-cache
- kvg-cache
- pytorch
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
library_name: transformers
---
# PLDR-LLM-v51-110M-5
## Model Description
PLDR-LLM-v51-110M-5 is a Large Language Model from Power Law Decoder Representations (PLDR-LLM) with KV-cache and G-cache support. PLDR-LLM is a foundational language model architecture that utilizes power law graph attention to generate deductive and inductive outputs. This model has a parameter size of 110M and corresponds to PLDRv51-110M-5, whose architecture and training details are provided in Table 1 of the research paper titled [PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference](https://arxiv.org/abs/2502.13502).
## Training data
PLDR-LLM-v51-110M-5 was pretrained on the [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a publicly available English web dataset with extensive filtering and deduplication.
## Training procedure
This model was trained for ~8B tokens on RefinedWeb over 250k steps per rank. It was trained autoregressively with cross-entropy loss.
## Intended Use and Limitations
This model is intended to be used for research purposes. Given text as input prompt, it carries out next token prediction to generate continuation text. The context length for this model is 1024 tokens.
## How to Use
### Via Huggingface Transformers Library
PLDR-LLM has custom model support for Huggingface Transformers library. PLDR-LLM with custom code is evaluated on Transformers 4.56.1 available at the time.
Using `pipeline`:
```python
from transformers import pipeline
generator = pipeline(  # avoid shadowing the imported `pipeline` function
    task="text-generation",
    model="fromthesky/PLDR-LLM-v51-110M-5",
    device="cuda",  # or "cpu"
    trust_remote_code=True
)

prompt = "The quick brown fox jumps over the lazy dog."
output = generator(prompt, top_p=0.6, top_k=0, temperature=1, do_sample=True,
                   tokenizer_encode_kwargs={"add_special_tokens": False},
                   use_cache=True, max_new_tokens=100)
print(output[0]["generated_text"])
```
Using `AutoModel`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device="cuda" # or "cpu"
model=AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path="fromthesky/PLDR-LLM-v51-110M-5",
device_map=device,
trust_remote_code=True
)
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="fromthesky/PLDR-LLM-v51-110M-5",
add_eos_token=False,
legacy=False,
trust_remote_code=True
)
prompt="The quick brown fox jumps over the lazy dog."
inputs = tokenizer([prompt], return_tensors="pt").to(device=device)
generated_ids = model.generate(**inputs,
max_new_tokens=100,
top_p=0.6,
top_k=0,
temperature=1,
do_sample=True,
use_cache=True
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
#### PLDR-LLM specific configurations:
- `custom_G_type`: `None` for G values learned during pretraining, `'identity'` for an LLM equivalent to SDPA, `'random'` for G values drawn from a random normal distribution, and `'external'` for custom G values assigned after model initialization. This setting matters mainly for training; for inference it is already set in the model's config.json file.
- `cache_first_G`: For batched inference, if set to `True`, the G values from the first sample prompt in the batch are cached and reused for all samples. If set to `False`, G values are cached separately for each sample prompt in the batch. For contrastive generation with `custom_G_type=None`, this needs to be set to `True`.
- `reference_rope`: If set to `True`, RoPE implementation implemented in the original paper is used. This is the case for model pretrained in this repo. If set to `False`, RoPE implementation from the Huggingface Transformers library is used.
- `output_pldr_attentions=True` returns the deductive outputs and learnable parameters of power law graph attention module as tuple containing:
the output of the residual metric learner (metric tensor, **A**), output (**A<sub>LM</sub>**) after application of iSwiGLU on metric tensor, learned exponents of potential tensor, learned weights for energy-curvature tensor, learned bias for energy-curvature tensor, energy-curvature tensor (**G<sub>LM</sub>**), and attention weights.
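A minimal sketch of retrieving these deductive outputs, assuming the usual Transformers convention of passing `output_*` flags to the forward call; the exact output field name is defined by the model's custom code, so inspect the returned object to locate the tuple:
```python
# Sketch only: the kwarg follows the output_* convention described above;
# inspect outputs.keys() to find the PLDR attention tuple in the custom output.
outputs = model(**inputs, output_pldr_attentions=True, return_dict=True)
print(outputs.keys())
```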
See config.json for other model configuration details.
#### Notes:
- This implementation of PLDR-LLM custom code was evaluated on Transformers 4.56.1 and pytorch 2.6.0.
- We also have a fork of transformers library with PLDR-LLM model support for future development. The PLDR-LLM model files are added to the library so custom model files are not necessary.
```bash
git clone https://github.com/burcgokden/transformers
cd transformers
git checkout add_PLDR_LLM
pip install -e ".[dev]"
```
- Static cache is not supported for models with `custom_G_type=None`.
- PLDR-LLM uses the EOS token `"[END]"` during pretraining to indicate the end of a sequence. For text generation, the EOS token does not need to be added to the prompt; to achieve this, set `add_eos_token=False` in the `tokenizer_config.json` file or when initializing the tokenizer. For the text generation `pipeline` call method, `tokenizer_encode_kwargs={"add_special_tokens": False}` can be used.
- When `add_bos_token=False` and `add_eos_token=False` are set for the tokenizer, the prompt `""` is an invalid input for single-batch inference since it contains no tokens. When padding is enabled, batched inference with `""` as one of the sample prompts causes its `input_ids` to consist only of pad tokens and its `attention_mask` to be all zeros. This edge case is handled differently for `_attn_implementation='eager'` and `'sdpa'`, resulting in different generation outputs for this prompt. Setting `add_bos_token=True`, `add_eos_token=True`, or explicitly providing the prompt as `"[PAD]"`, `"[START]"`, or `"[END]"` gives the same output for either implementation. This issue does not affect KV-cache and G-cache.
### Via Original Implementation
- The original model implementation files can be found in the folder named `paper_saved_model_files/`. The model checkpoint and tokenizer can be loaded into the PLDR-LLM framework to generate text as described in the code repository for training this model: [PLDR-LLM-with-KVG-cache](https://github.com/burcgokden/PLDR-LLM-with-KVG-cache).
### LM Evaluation Harness Support
- The model can be used with a fork of LM-Evaluation-Harness Suite with PLDR-LLM with KV-cache and G-cache support: [lm-evaluation-harness-with-PLDR-LLM-kvg-cache](https://github.com/burcgokden/lm-evaluation-harness-with-PLDR-LLM-kvg-cache).
### Limitations and Biases
Large Language Models may generate text that is profane, lewd, socially unacceptable or offensive depending on the contents of the dataset they were pretrained on. RefinedWeb is a dataset that is as toxic and biased as the Pile. Please see the papers for [RefinedWeb](https://arxiv.org/abs/2306.01116) and [the Pile](https://arxiv.org/pdf/2101.00027) for more information. Moreover, large language models are susceptible to hallucinations and may generate text that contains incorrect, irrelevant or misleading information. Since it is very hard to anticipate the contents of generated text ahead of time, the output of large language models needs to be heavily moderated and curated to prevent undesired content from appearing without warning.
## Eval results
- The evaluation results on benchmarks with zero-shot setting and their comparison to LLM models of similar size reported in the literature can be found in Tables 3-5 and 7 of the [research paper](https://arxiv.org/abs/2502.13502).
- For the implementation via the Huggingface Transformers library, evaluating on the same benchmark suite gives the same results as in the paper for all benchmarks, except for the PIQA score, which is slightly higher at 61.75 for this model.
### BibTeX entry and citation info
Please cite this model as:
```bibtex
@misc{gokden2025pldrllmkvgcache,
title={PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference},
author={Burc Gokden},
year={2025},
eprint={2502.13502},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.13502},
}
```
|
hertokredas55/blockassist
|
hertokredas55
| 2025-09-19T01:19:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky scurrying jay",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T01:11:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky scurrying jay
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pandoradox/qwen2.5-3b-instruct_oscillator1_350
|
pandoradox
| 2025-09-19T00:59:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"grpo",
"lora",
"transformers",
"trl",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-09-19T00:58:55Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
tags:
- base_model:adapter:Qwen/Qwen2.5-3B-Instruct
- grpo
- lora
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
manbeast3b/007-american-party-01-2
|
manbeast3b
| 2025-09-19T00:56:45Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-10T00:39:03Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
amrtweg/Actora_full
|
amrtweg
| 2025-09-19T00:54:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-19T00:42:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
k1000dai/residual_transformer_libero_object
|
k1000dai
| 2025-09-19T00:44:36Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"residual_transformer",
"robotics",
"dataset:k1000dai/libero-object-smolvla",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-19T00:44:20Z |
---
datasets: k1000dai/libero-object-smolvla
library_name: lerobot
license: apache-2.0
model_name: residual_transformer
pipeline_tag: robotics
tags:
- residual_transformer
- robotics
- lerobot
---
# Model Card for residual_transformer
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
telepix/PIXIE-Spell-Preview-1.7B
|
telepix
| 2025-09-19T00:40:39Z | 19 | 6 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"qwen3",
"sentence-similarity",
"dense-encoder",
"dense",
"feature-extraction",
"telepix",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-19T00:29:42Z |
---
tags:
- sentence-transformers
- sentence-similarity
- dense-encoder
- dense
- feature-extraction
- telepix
pipeline_tag: feature-extraction
library_name: sentence-transformers
license: apache-2.0
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/61d6f4a4d49065ee28a1ee7e/V8n2En7BlMNHoi1YXVv8Q.png" width="400"/>
</p>
# PIXIE-Spell-Preview-1.7B
**PIXIE-Spell-Preview-1.7B** is a decoder-based embedding model trained on Korean and English dataset,
developed by [TelePIX Co., Ltd](https://telepix.net/).
**PIXIE** stands for Tele**PIX** **I**ntelligent **E**mbedding, representing TelePIX’s high-performance embedding technology.
This model is specifically optimized for semantic retrieval tasks in Korean and English, and demonstrates strong performance in aerospace domain applications. Through extensive fine-tuning and domain-specific evaluation, PIXIE shows robust retrieval quality for real-world use cases such as document understanding, technical QA, and semantic search in aerospace and related high-precision fields.
It also performs competitively across a wide range of open-domain Korean and English retrieval benchmarks, making it a versatile foundation for multilingual semantic search systems.
## Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 2048 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** Multilingual — optimized for high performance in Korean and English
- **Domain Specialization:** Aerospace semantic search
- **License:** apache-2.0
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'Qwen3Model'})
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
```
## Quality Benchmarks
**PIXIE-Spell-Preview-1.7B** is a multilingual embedding model specialized for Korean and English retrieval tasks.
It delivers consistently strong performance across a diverse set of domain-specific and open-domain benchmarks in both languages, demonstrating its effectiveness in real-world semantic search applications.
The table below presents the retrieval performance of several embedding models evaluated on a variety of Korean and English benchmarks.
We report **Normalized Discounted Cumulative Gain (NDCG)** scores, which measure how well a ranked list of documents aligns with ground truth relevance. Higher values indicate better retrieval quality.
- **Avg. NDCG**: Average of NDCG@1, @3, @5, and @10 across all benchmark datasets.
- **NDCG@k**: Relevance quality of the top-*k* retrieved results.
All evaluations were conducted using the open-source **[Korean-MTEB-Retrieval-Evaluators](https://github.com/BM-K/Korean-MTEB-Retrieval-Evaluators)** codebase to ensure consistent dataset handling, indexing, retrieval, and NDCG@k computation across models.
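For reference, below is a minimal sketch of how NDCG@k is computed; the reported scores come from the evaluator codebase linked above, and this snippet only illustrates the metric itself:
```python
import math

def dcg_at_k(relevances, k):
    # Position 1 gets weight 1/log2(2), position 2 gets 1/log2(3), and so on.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Binary relevance of the top-5 ranked documents for one query:
print(ndcg_at_k([1, 0, 1, 0, 0], k=5))  # ~0.92
```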
#### 6 Datasets of MTEB (Korean)
Our model, **telepix/PIXIE-Spell-Preview-1.7B**, achieves strong performance across most metrics and benchmarks, demonstrating strong generalization across domains such as multi-hop QA, long-document retrieval, public health, and e-commerce.
| Model Name | # params | Avg. NDCG | NDCG@1 | NDCG@3 | NDCG@5 | NDCG@10 |
|------|:---:|:---:|:---:|:---:|:---:|:---:|
| telepix/PIXIE-Spell-Preview-1.7B | 1.7B | 0.7567 | 0.7149 | 0.7541 | 0.7696 | 0.7882 |
| telepix/PIXIE-Spell-Preview-0.6B | 0.6B | 0.7280 | 0.6804 | 0.7258 | 0.7448 | 0.7612 |
| telepix/PIXIE-Rune-Preview | 0.5B | 0.7383 | 0.6936 | 0.7356 | 0.7545 | 0.7698 |
| telepix/PIXIE-Splade-Preview | 0.1B | 0.7253 | 0.6799 | 0.7217 | 0.7416 | 0.7579 |
| | | | | | | |
| nlpai-lab/KURE-v1 | 0.5B | 0.7312 | 0.6826 | 0.7303 | 0.7478 | 0.7642 |
| BAAI/bge-m3 | 0.5B | 0.7126 | 0.6613 | 0.7107 | 0.7301 | 0.7483 |
| Snowflake/snowflake-arctic-embed-l-v2.0 | 0.5B | 0.7050 | 0.6570 | 0.7015 | 0.7226 | 0.7390 |
| Qwen/Qwen3-Embedding-0.6B | 0.6B | 0.6872 | 0.6423 | 0.6833 | 0.7017 | 0.7215 |
| jinaai/jina-embeddings-v3 | 0.5B | 0.6731 | 0.6224 | 0.6715 | 0.6899 | 0.7088 |
| SamilPwC-AXNode-GenAI/PwC-Embedding_expr | 0.5B | 0.6709 | 0.6221 | 0.6694 | 0.6852 | 0.7069 |
| Alibaba-NLP/gte-multilingual-base | 0.3B | 0.6679 | 0.6068 | 0.6673 | 0.6892 | 0.7084 |
| openai/text-embedding-3-large | N/A | 0.6465 | 0.5895 | 0.6467 | 0.6646 | 0.6853 |
Descriptions of the benchmark datasets used for evaluation are as follows:
- **Ko-StrategyQA**
A Korean multi-hop open-domain question answering dataset designed for complex reasoning over multiple documents.
- **AutoRAGRetrieval**
A domain-diverse retrieval dataset covering finance, government, healthcare, legal, and e-commerce sectors.
- **MIRACLRetrieval**
A document retrieval benchmark built on Korean Wikipedia articles.
- **PublicHealthQA**
A retrieval dataset focused on medical and public health topics.
- **BelebeleRetrieval**
A dataset for retrieving relevant content from web and news articles in Korean.
- **MultiLongDocRetrieval**
A long-document retrieval benchmark based on Korean Wikipedia and mC4 corpus.
> **Tip:**
> While many benchmark datasets are available for evaluation, in this project we chose to use only those that contain clean positive documents for each query. Keep in mind that a benchmark dataset is just that: a benchmark. For real-world applications, it is best to construct an evaluation dataset tailored to your specific domain and evaluate embedding models, such as PIXIE, in that environment to determine the most suitable one.
#### 7 Datasets of BEIR (English)
Our model, **telepix/PIXIE-Spell-Preview-1.7B**, achieves strong performance on a wide range of tasks, including fact verification, multi-hop question answering, financial QA, and scientific document retrieval, demonstrating competitive generalization across diverse domains.
| Model Name | # params | Avg. NDCG | NDCG@1 | NDCG@3 | NDCG@5 | NDCG@10 |
|------|:---:|:---:|:---:|:---:|:---:|:---:|
| telepix/PIXIE-Spell-Preview-1.7B | 1.7B | 0.5630 | 0.5446 | 0.5529 | 0.5660 | 0.5885 |
| telepix/PIXIE-Spell-Preview-0.6B | 0.6B | 0.5354 | 0.5208 | 0.5241 | 0.5376 | 0.5589 |
| telepix/PIXIE-Rune-Preview | 0.5B | 0.5781 | 0.5691 | 0.5663 | 0.5791 | 0.5979 |
| | | | | | | |
| Snowflake/snowflake-arctic-embed-l-v2.0 | 0.5B | 0.5812 | 0.5725 | 0.5705 | 0.5811 | 0.6006 |
| Qwen/Qwen3-Embedding-0.6B | 0.6B | 0.5558 | 0.5321 | 0.5451 | 0.5620 | 0.5839 |
| Alibaba-NLP/gte-multilingual-base | 0.3B | 0.5541 | 0.5446 | 0.5426 | 0.5574 | 0.5746 |
| BAAI/bge-m3 | 0.5B | 0.5318 | 0.5078 | 0.5231 | 0.5389 | 0.5573 |
| nlpai-lab/KURE-v1 | 0.5B | 0.5272 | 0.5017 | 0.5171 | 0.5353 | 0.5548 |
| SamilPwC-AXNode-GenAI/PwC-Embedding_expr | 0.5B | 0.5111 | 0.4766 | 0.5006 | 0.5212 | 0.5460 |
| jinaai/jina-embeddings-v3 | 0.6B | 0.4482 | 0.4116 | 0.4379 | 0.4573 | 0.4861 |
Descriptions of the benchmark datasets used for evaluation are as follows:
- **ArguAna**
A dataset for argument retrieval based on claim-counterclaim pairs from online debate forums.
- **FEVER**
A fact verification dataset using Wikipedia for evidence-based claim validation.
- **FiQA-2018**
A retrieval benchmark tailored to the finance domain with real-world questions and answers.
- **HotpotQA**
A multi-hop open-domain QA dataset requiring reasoning across multiple documents.
- **MSMARCO**
A large-scale benchmark using real Bing search queries and corresponding web documents.
- **NQ**
A Google QA dataset where user questions are answered using Wikipedia articles.
- **SCIDOCS**
A citation-based document retrieval dataset focused on scientific papers.
## Direct Use (Semantic Search)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Load the model
model_name = 'telepix/PIXIE-Spell-Preview-1.7B'
model = SentenceTransformer(model_name)
# Define the queries and documents
queries = [
"텔레픽스는 어떤 산업 분야에서 위성 데이터를 활용하나요?",
"국방 분야에 어떤 위성 서비스가 제공되나요?",
"텔레픽스의 기술 수준은 어느 정도인가요?",
]
documents = [
"텔레픽스는 해양, 자원, 농업 등 다양한 분야에서 위성 데이터를 분석하여 서비스를 제공합니다.",
"정찰 및 감시 목적의 위성 영상을 통해 국방 관련 정밀 분석 서비스를 제공합니다.",
"TelePIX의 광학 탑재체 및 AI 분석 기술은 Global standard를 상회하는 수준으로 평가받고 있습니다.",
"텔레픽스는 우주에서 수집한 정보를 분석하여 '우주 경제(Space Economy)'라는 새로운 가치를 창출하고 있습니다.",
"텔레픽스는 위성 영상 획득부터 분석, 서비스 제공까지 전 주기를 아우르는 솔루션을 제공합니다.",
]
# Compute embeddings: use `prompt_name="query"` to encode queries!
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
# Compute cosine similarity scores
scores = model.similarity(query_embeddings, document_embeddings)
# Output the results
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
print("Query:", query)
for document, score in doc_score_pairs:
print(score, document)
```
## License
The PIXIE-Spell-Preview-1.7B model is licensed under Apache License 2.0.
## Citation
```
@software{TelePIX-PIXIE-Spell-Preview-1.7B,
title={PIXIE-Spell-Preview-1.7B},
author={TelePIX AI Research Team and Bongmin Kim},
year={2025},
url={https://huggingface.co/telepix/PIXIE-Spell-Preview-1.7B}
}
```
## Contact
If you have any suggestions or questions about the PIXIE, please reach out to the authors at [email protected].
|
igorktech/PodkatikNocturne-V-7b-dft-v9
|
igorktech
| 2025-09-19T00:28:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:Vikhrmodels/Vikhr-7B-instruct_0.4",
"base_model:finetune:Vikhrmodels/Vikhr-7B-instruct_0.4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T00:22:26Z |
---
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** igorktech
- **License:** apache-2.0
- **Finetuned from model:** Vikhrmodels/Vikhr-7B-instruct_0.4
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Chat-KTO-i1-GGUF
|
mradermacher
| 2025-09-19T00:04:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:NewEden/Chat-KTO",
"base_model:quantized:NewEden/Chat-KTO",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-18T21:10:11Z |
---
base_model: NewEden/Chat-KTO
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/NewEden/Chat-KTO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Chat-KTO-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Chat-KTO-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Chat-KTO-i1-GGUF/resolve/main/Chat-KTO.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
DSMI/LLaMA-E
|
DSMI
| 2025-09-18T23:33:05Z | 0 | 4 | null |
[
"arxiv:2308.04913",
"license:llama2",
"region:us"
] | null | 2024-05-13T11:04:15Z |
---
license: llama2
---
<div align="center">
<a href="https://huggingface.co/DSMI/LLaMA-E/tree/main#/">
<img src="https://huggingface.co/DSMI/LLaMA-E/resolve/main/LLaMA-E.png?download=true" alt="LLaMA-E">
</a>
<h1>LLaMA-E</h1>
</div>
### Model Introduction
LLaMA-E is meticulously crafted for e-commerce authoring tasks, incorporating specialized features to excel in generating product descriptions, advertisements, and other related content, as outlined in https://arxiv.org/abs/2308.04913#/.
### Usage
Inference uses Hugging Face Transformers on NVIDIA GPUs; requirements tested on Python 3.10:
```
python inference.py --load_8bit --base_model 'meta-llama/Llama-2-7b-hf' --lora_weights 'DSMI/LLaMA-E'
```
```
***Instruction: Where can I buy the handmade jewellery?
***Response: Etsy, NOVICA, UncommonGoods, World Market, and Global Crafts offer unique handmade jewelry from artisans worldwide.
***Instruction: Create an attractive advertisement for the Christmas sale of the following product.
***Input: Custom Photo Music Plaque,Personalized Photo Frame,Album Cover Song Plaque,Music Photo Name Night Lamp,Photo and Music Gift, Music Prints
***Response: Personalize your music space with a Custom Photo Music Plaque. Add your favorite photo and song lyrics to create a unique gift for music lovers.
```
### Citation
If you find our work helpful, please consider [citing][paper] the following paper.
```bibtex
@inproceedings{shi-etal-2025-llama,
title = "{LL}a{MA}-{E}: Empowering {E}-commerce Authoring with Object-Interleaved Instruction Following",
author = "Shi, Kaize and
Sun, Xueyao and
Wang, Dingxian and
Fu, Yinlin and
Xu, Guandong and
Li, Qing",
editor = "Rambow, Owen and
Wanner, Leo and
Apidianaki, Marianna and
Al-Khalifa, Hend and
Eugenio, Barbara Di and
Schockaert, Steven",
booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
month = jan,
year = "2025",
address = "Abu Dhabi, UAE",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.coling-main.58/",
pages = "870--885",
abstract = "E-commerce authoring entails creating engaging, diverse, and targeted content to enhance preference elicitation and retrieval experience. While Large Language Models (LLMs) have revolutionized content generation, they often fall short in e-commerce applications due to their limited memorization of domain-specific features. This paper proposes LLaMA-E, the unified e-commerce authoring models that address the contextual preferences of customers, sellers, and platforms, the essential objects in e-commerce operation. We design the instruction set derived from tasks of ads generation, query-enhanced product title rewriting, product classification, purchase intent speculation, and general e-commerce Q{\&}A. The instruction formulation ensures the interleaved cover of the presented and required object features, allowing the alignment of base models to parameterize e-commerce knowledge comprehensively. The proposed LLaMA-E models achieve state-of-the-art evaluation performance and exhibit the advantage in zero-shot practical applications. To our knowledge, this is the first LLM tailored to empower authoring applications with comprehensive scenario understanding by integrating features focused on participated objects."
}
```
### License
The model released here is under the [Llama-2 LICENSE][license] to ensure more flexible accessibility; please adhere to the corresponding license.
### Acknowledgements
Our inference code is based on [tloen's alpaca-lora][tloen].
[license]: <https://ai.meta.com/llama/license/#/>
[paper]: <https://arxiv.org/abs/2308.04913#/>
[tloen]: <https://huggingface.co/tloen/alpaca-lora-7b#/>
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758235166
|
schooncestiaa
| 2025-09-18T22:40:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T22:40:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_16_4_okvqa_37_0.0001_12800_3
|
winnieyangwannan
| 2025-09-18T22:39:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-18T22:38:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
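A minimal loading sketch, assuming the checkpoint follows the standard Qwen2.5-VL layout indicated by this repository's tags (the repo id is this card's location):
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_16_4_okvqa_37_0.0001_12800_3"
processor = AutoProcessor.from_pretrained(model_id)  # handles image + text inputs
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
```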
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nexus-Walker/Reson
|
Nexus-Walker
| 2025-09-18T22:17:30Z | 16 | 1 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"doi:10.57967/hf/6480",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2025-09-06T13:14:50Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-2-7b-chat-hf
- lora
- transformers
license: cc-by-nc-4.0
---
# Reson — LLaMA-2 7B LoRA Fine-Tune
⚠️ Note: Reson does **not hallucinate** in the usual sense.
It was trained to **adapt** — outputs may look unconventional or speculative because the objective is **meta-cognition and adaptive strategy**, not strict factual recall.
Reson is a LoRA fine-tuned version of **LLaMA-2 7B Chat**, trained on ~11k instruction/response pairs.
It simulates **reflective and strategic thinking** across multiple domains.
---
## Model Details
### Model Description
- **What it is:** LoRA adapters for LLaMA-2 7B Chat focused on *adaptive reasoning under uncertainty*.
- **Why:** To explore identity emergence, strategic simulation, cross-domain transfer, and explicit self-reflection.
- **How it behaves:** Outputs may appear “hallucinatory” but are actually *adaptive responses* guided by meta-cognition.
- **Developed by:** Nexus-Walker (Daniele Cangi)
- **Model type:** Causal LM (PEFT/LoRA adapters)
- **Languages:** English, Italian
- **License:** Business Source License (BSL 1.1)
- **Finetuned from model:** `meta-llama/Llama-2-7b-chat-hf`
### Model Sources
- **Repository:** https://huggingface.co/Nexus-Walker/Reson
- **Demo transcripts:** [`demo_chat.md`](./demo_chat.md)
- **⚠️ CLI chat** (recommended: `chat.py` is optimized and balanced for the Reson model): [`chat.py`](./chat.py)
---
## Uses
### Direct Use
- Research on **meta-cognition** and **adaptive reasoning** in LLMs.
- Creative simulations across domains (business strategy, adversarial contexts, scientific discussion).
- Conversational demos exploring identity, reflection, and scenario planning.
### Downstream Use
- Integration into **decision-support pipelines**.
- **Multi-agent experiments** with reflective/strategic agents.
### Out-of-Scope Use
- Benchmark-style factual QA.
- Critical applications (medical, legal, safety).
---
## Bias, Risks, and Limitations
- Optimized for **adaptation**, not factual accuracy.
- May generate speculative narratives by design.
- Not suitable for unsupervised high-stakes use.
### Recommendations
- Treat outputs as **reasoning simulations**.
- Always apply **human-in-the-loop** in sensitive contexts.
---
## How to Get Started
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
base = "meta-llama/Llama-2-7b-chat-hf"
adapter = "Nexus-Walker/Reson"
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto", load_in_4bit=True)
model = PeftModel.from_pretrained(model, adapter)
prompt = "Who are you?"
inputs = tok(prompt, return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=150)
print(tok.decode(out[0], skip_special_tokens=True))
```
|
ChenWu98/numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_condition
|
ChenWu98
| 2025-09-18T21:28:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T21:23:55Z |
---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_condition
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_condition
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_condition", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/lu1ak9p5)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
samoline/55a72405-98d1-4b39-8264-bd0b3914be7b
|
samoline
| 2025-09-18T21:07:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2402.03300",
"base_model:Maykeye/TinyLLama-v0",
"base_model:finetune:Maykeye/TinyLLama-v0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T21:07:00Z |
---
base_model: Maykeye/TinyLLama-v0
library_name: transformers
model_name: root/.cache/huggingface/hub/trained_repo
tags:
- generated_from_trainer
licence: license
---
# Model Card for root/.cache/huggingface/hub/trained_repo
This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="samoline/55a72405-98d1-4b39-8264-bd0b3914be7b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
gumperto/Qwen2.5-32B-Instruct-emergent-finetune-haiku_samples-all-full-r32
|
gumperto
| 2025-09-18T21:04:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"conversational",
"base_model:unsloth/Qwen2.5-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-32B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T19:57:47Z |
---
base_model: unsloth/Qwen2.5-32B-Instruct
library_name: transformers
model_name: Qwen2.5-32B-Instruct-emergent-finetune-haiku_samples-all-full-r32
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for Qwen2.5-32B-Instruct-emergent-finetune-haiku_samples-all-full-r32
This model is a fine-tuned version of [unsloth/Qwen2.5-32B-Instruct](https://huggingface.co/unsloth/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumperto/Qwen2.5-32B-Instruct-emergent-finetune-haiku_samples-all-full-r32", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gumperto-waseda-university/clarifying-em/runs/2e1xp7je)
This model was trained with SFT.
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
timm/vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m
|
timm
| 2025-09-18T20:14:30Z | 16 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-feature-extraction",
"transformers",
"dataset:lvd-1689m",
"arxiv:2508.10104",
"arxiv:2010.11929",
"license:other",
"region:us"
] |
image-feature-extraction
| 2025-09-17T16:34:42Z |
---
tags:
- image-feature-extraction
- timm
- transformers
pipeline_tag: image-feature-extraction
library_name: timm
license: other
license_name: dinov3-license
license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
datasets:
- lvd-1689m
---
# Model card for vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m
A DINOv3 ViT model image feature encoder. Distilled on LVD-1689M from the DINOv3 ViT-7B model.
## Model Notes
* The original model weights ended up with all QKV projection biases being zero. For `timm`, the QKV bias has therefore been disabled (`qkv_bias=False`) for these models and the zero weights are not loaded. For some model sizes there are variants with `qkvb` in the name that keep the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of `transformers` and the original models.
* The original models keep RoPE periods as a persistent `bfloat16` buffer, while `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic on devices without bfloat16 support, and appears to work as well if not slightly better for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and produce matching outputs (see the snippet below).
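For example, the truncation above can be applied right after loading; this is optional and only matters when bit-exact agreement with the original release is needed:

```python
import timm
import torch

model = timm.create_model('vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m', pretrained=True).eval()
# Truncate RoPE periods to bfloat16 and back to float32 to match the
# persistent bfloat16 buffers of the original DINOv3 checkpoints.
model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)
```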
## Model Details
- **Model Type:** Image Feature Encoder
- **Model Stats:**
- Params (M): 840.6
- GMACs: 224.9
- Activations (M): 193.6
- Image size: 256 x 256
- **Original:** https://github.com/facebookresearch/dinov3
- **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license)
- **Dataset:** LVD-1689M
- **Papers:**
- DINOv3: https://arxiv.org/abs/2508.10104
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 1280, 16, 16])
# torch.Size([1, 1280, 16, 16])
# torch.Size([1, 1280, 16, 16])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 261, 1280) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
See the associated paper for details on the evaluation protocols
### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M)
| Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair |
|-------|---------|------|---------|-------|--------|------|-------|------|-------|
| **Global Tasks** | | | | | **Dense Tasks** | | | | |
| DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 |
| DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 |
| DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 |
| DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 |
| DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 |
| DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 |
### Results for ConvNeXt backbones distilled on web (LVD-1689M)
| Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ |
|-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------|
| **Global Tasks** | | | | | | | **Dense Tasks** | |
| DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 |
| DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 |
| DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 |
| DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 |
### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M)
#### (GEO-Bench) Classification
| Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean |
|-------|---------|--------------|-----------|-------------|----------|----------|------|
| DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 |
| DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 |
#### (GEO-Bench) Segmentation
| Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean |
|-------|----------|--------------|------------|-------------|--------------|-----------|------|
| DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 |
| DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 |
## Citation
```bibtex
@article{simeoni2025dinov3,
  title={DINOv3},
  author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others},
  journal={arXiv preprint arXiv:2508.10104},
  year={2025}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
qingy2024/HQRD-109M
|
qingy2024
| 2025-09-18T19:47:32Z | 29 | 0 | null |
[
"safetensors",
"bert",
"en",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T01:37:31Z |
---
license: apache-2.0
language:
- en
base_model:
- google-bert/bert-base-uncased
---
# HQRD 109M (fine tuned from bert-base-uncased)
This is a 109M-parameter model fine-tuned to detect high-quality responses. It outputs a score ranging from 0 (bad) to 1 (good), though it can occasionally produce a value slightly outside that range, such as 1.01 or -0.013.
**Example Inference Code**
```python
import torch

from transformers import AutoModelForSequenceClassification, AutoTokenizer
# Load the model and tokenizer from the Hugging Face Hub
model_name = "qingy2024/HQRD-109M"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Example text to classify
text = "Quantum mechanics is a fundamental branch of physics that describes the behavior of particles on very small scales, such as atoms and subatomic particles. It differs significantly from classical mechanics, which governs macroscopic objects, because it introduces concepts like wave-particle duality, uncertainty, and probabilistic outcomes."
# Tokenize the text
inputs = tokenizer(text, truncation=True, max_length=512, padding=True, return_tensors="pt")
# Perform inference
with torch.no_grad(): # Disable gradient computation for inference
outputs = model(**inputs)
prediction = outputs.logits.item() # Extract the single float value
# Interpret the result
print(f"Prediction score: {prediction:.3f}")
```
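If a strictly bounded score is needed, the raw output can be clamped after inference; this is a trivial post-processing sketch, not part of the model itself:

```python
# Clamp the raw score into [0, 1]; stray values like 1.01 or -0.013
# become 1.0 and 0.0 respectively.
score = max(0.0, min(1.0, prediction))
print(f"Clamped score: {score:.3f}")
```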
|
mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF
|
mradermacher
| 2025-09-18T19:39:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2",
"base_model:quantized:Nexesenex/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-18T05:01:57Z |
---
base_model: Nexesenex/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
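The two-part Q6_K download above is a plain byte-level split, so the pieces just need to be concatenated before use (see the linked READMEs). A minimal Python sketch, assuming both parts are in the current directory:

```python
# Join raw .partXofY pieces back into a single GGUF file.
import shutil

name = "Llama_3.x_70b_L3.3-Dolphin-Eva_fusion_v2.i1-Q6_K.gguf"
with open(name, "wb") as out:
    for part in (f"{name}.part1of2", f"{name}.part2of2"):
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```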
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
starust/Llama-3.2-3B-GGUF
|
starust
| 2025-09-18T19:36:44Z | 117 | 0 | null |
[
"gguf",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-11T20:54:08Z |
---
license: llama3.2
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
|
Marulya/blockassist
|
Marulya
| 2025-09-18T19:32:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy robust python",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T19:16:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy robust python
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChenWu98/numina_qwen_2.5_sft_numina_20k_cluster2_split_0
|
ChenWu98
| 2025-09-18T19:31:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T19:30:44Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_numina_20k_cluster2_split_0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_sft_numina_20k_cluster2_split_0
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_numina_20k_cluster2_split_0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/972i7jq1)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
drito/ppo-LunarLander-v2
|
drito
| 2025-09-18T19:27:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-18T19:27:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.80 +/- 16.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the checkpoint from the Hub (the filename is assumed to follow the standard `<algo>-<env>.zip` naming of the SB3 template):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(repo_id="drito/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
te4bag/LoRA-llama-3.2-3B-boolq
|
te4bag
| 2025-09-18T19:19:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B",
"region:us"
] |
text-generation
| 2025-09-18T19:19:00Z |
---
base_model: meta-llama/Llama-3.2-3B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.2-3B
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
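A minimal loading sketch, assuming the adapter applies on top of the base model listed in this card's metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-3.2-3B"             # base model from the metadata above
adapter = "te4bag/LoRA-llama-3.2-3B-boolq"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA adapter
```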
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
ddfj34/act_so101_model_250918_640
|
ddfj34
| 2025-09-18T18:02:42Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:ddfj34/record-test-20250918-640",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-18T18:02:22Z |
---
datasets: ddfj34/record-test-20250918-640
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
mradermacher/PromptEnhancer-32B-GGUF
|
mradermacher
| 2025-09-18T16:54:22Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-to-image",
"prompt-enhancement",
"prompt-rewriting",
"chain-of-thought",
"zh",
"en",
"base_model:PromptEnhancer/PromptEnhancer-32B",
"base_model:quantized:PromptEnhancer/PromptEnhancer-32B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-to-image
| 2025-09-18T12:25:18Z |
---
base_model: PromptEnhancer/PromptEnhancer-32B
language:
- zh
- en
library_name: transformers
license: other
license_link: https://huggingface.co/PromptEnhancer/PromptEnhancer-32B/blob/main/License.txt
license_name: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-to-image
- prompt-enhancement
- prompt-rewriting
- chain-of-thought
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/PromptEnhancer/PromptEnhancer-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PromptEnhancer-32B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/PromptEnhancer-32B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.8 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.mmproj-f16.gguf) | mmproj-f16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PromptEnhancer-32B-GGUF/resolve/main/PromptEnhancer-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758212990
|
schooncestiaa
| 2025-09-18T16:30:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T16:30:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758212374
|
schooncestiaa
| 2025-09-18T16:20:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T16:20:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
midwestern-simulation/essence-3b-v2
|
midwestern-simulation
| 2025-09-18T16:01:12Z | 0 | 0 | null |
[
"safetensors",
"embedding",
"text-to-text",
"en",
"dataset:mlfoundations/dclm-baseline-1.0",
"dataset:midwestern-simulation/textfiles",
"dataset:bigcode/the-stack-smol",
"dataset:yuhan-nlp/multiturn-feedback",
"base_model:HuggingFaceTB/SmolLM3-3B-Base",
"base_model:finetune:HuggingFaceTB/SmolLM3-3B-Base",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-09-14T21:45:01Z |
---
license: cc-by-sa-4.0
datasets:
- mlfoundations/dclm-baseline-1.0
- midwestern-simulation/textfiles
- bigcode/the-stack-smol
- yuhan-nlp/multiturn-feedback
language:
- en
base_model:
- HuggingFaceTB/SmolLM3-3B-Base
tags:
- embedding
- text-to-text
---
# midwestern-simulation/essence-3b-v2
https://midwestern-simulation.neocities.org/main/library/Essense%209-16-2025
we've trained an ai model that compresses sequences of token embeddings into shorter sequences of token embeddings, from which it then attempts to reconstruct the original text, with varying degrees of success.
the main dial we can turn is the number of "embedding tokens" used to represent a text. here's how it works:
1. in the encoder, these special tokens are appended to the original text.
2. the hidden states from these token positions are extracted.
3. they are then placed at the start of the decoder's context window.
4. the decoder then attempts to reconstruct the original text from this compressed concept.
### usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
from torch import nn
import torch
from huggingface_hub import hf_hub_download
device = torch.device("cuda:0")
dtype = torch.bfloat16
base_model_id = "HuggingFaceTB/SmolLM3-3B-Base"
compressor_id = "midwestern-simulation/essence-3b-v2"
# === MODEL LOADING ===
tokenizer = AutoTokenizer.from_pretrained(base_model_id, padding_side='left')
encoder = AutoModelForCausalLM.from_pretrained(base_model_id, device_map={"":device}, torch_dtype=dtype)
decoder = AutoModelForCausalLM.from_pretrained(base_model_id, device_map={"":device}, torch_dtype=dtype)
encoder = PeftModel.from_pretrained(encoder, compressor_id, subfolder="encoder")
decoder = PeftModel.from_pretrained(decoder, compressor_id, subfolder="decoder")
projector = nn.Linear(2048, 2048).to(device).to(dtype)
projector.load_state_dict(torch.load(hf_hub_download(repo_id=compressor_id, filename="projector.pt")))
# === MODEL INFERENCE ===
text = "mary had a little lamb, little lamb, little lamb, mary had a little lamb whose fleece was white as snow"
n_embed_tokens = 4  # best performance in the range 1-256; may generalize poorly outside this range
encoder_input = text.strip() + f"\n[[/END DOCUMENT]]\n[[START SUMMARY ntoks={n_embed_tokens}]]" + "<|im_end|>" * n_embed_tokens
tokenized = tokenizer(encoder_input, return_tensors='pt', add_special_tokens=False)
tokenized = {k: v.to(device) for k, v in tokenized.items()}
encoding = encoder.model.model(**tokenized).last_hidden_state[:, -n_embed_tokens:, :]
encoding = projector(encoding)
tokenized_prefix = tokenizer("\n[[/END SUMMARY]]\n[[START DOCUMENT]]\n", return_tensors="pt", add_special_tokens=False)
prefix_embeds = decoder.model.model.embed_tokens(tokenized_prefix['input_ids'].to(device))
inputs_embeds = torch.cat([encoding, prefix_embeds], 1)
output = decoder.generate(
inputs_embeds=inputs_embeds,
temperature=0.7,
max_new_tokens=1024,
do_sample=True,
top_k=128,
min_new_tokens=8,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id
)
print(tokenizer.decode(output[0]))
# mary had a little lamb, little lamb, little lamb, mary had a little lamb whose fleece was white as snow
# [[/END DOCUMENT]]<|end_of_text|>
```
tips:
- this model was trained for 12k steps not only on plain reconstruction but also on reconstruction with span corruption and masked-language modelling; the mask token id used was 128005
- due to compute constraints, it was only trained on contexts up to 2k tokens and up to 256 embedding tokens, though it may show limited generalization outside these bounds
- during training the average sample length was around 702 tokens; you will find the best performance near that range
- for short samples, around 12 embedding tokens gives a good balance between reconstruction quality and the ability to play with concepts between multiple texts' embeddings: the fewer tokens you use, the easier concept blending becomes; the more you use, the better the reconstruction
|
leeduy0403/mistral_finetuned_model_full
|
leeduy0403
| 2025-09-18T15:59:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T15:59:41Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** leeduy0403
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nimbz/knifeayumu_Cydonia-v4.1-MS3.2-Magnum-Diamond-24B_6.66bpw_H8_EXL3
|
Nimbz
| 2025-09-18T15:27:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:knifeayumu/Cydonia-v4.1-MS3.2-Magnum-Diamond-24B",
"base_model:quantized:knifeayumu/Cydonia-v4.1-MS3.2-Magnum-Diamond-24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl3",
"region:us"
] |
text-generation
| 2025-09-18T10:50:05Z |
---
base_model:
- knifeayumu/Cydonia-v4.1-MS3.2-Magnum-Diamond-24B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
base_model_relation: quantized
quantized_by: Nimbz
---
EXL3 6.66 bpw H8 quant, tested with 24k Q8 context within 24 GB of VRAM.
Original: [https://huggingface.co/knifeayumu/Cydonia-v4.1-MS3.2-Magnum-Diamond-24B](https://huggingface.co/knifeayumu/Cydonia-v4.1-MS3.2-Magnum-Diamond-24B)

# Cydonia-v4.1-MS3.2-Magnum-Diamond-24B
Recipe based on [knifeayumu/Cydonia-v1.2-Magnum-v4-22B](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B). Just an update for those who are interested.
### Image to Video Generation Info
[Wan-AI/Wan2.2-I2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B) was used to turn this [image from the previous merge](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/FoxGirlonCydonia.png) (slightly cropped) to an animation utilising [lightx2v/Wan2.2-Lightning](https://huggingface.co/lightx2v/Wan2.2-Lightning) for faster generation and [pollockjj/ComfyUI-MultiGPU](https://github.com/pollockjj/ComfyUI-MultiGPU) nodes.

---
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
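For reference, SLERP interpolates between two weight tensors $p$ and $q$ along the arc between them rather than along a straight line; with $\theta$ the angle between the flattened tensors:

$$\mathrm{slerp}(p, q; t) = \frac{\sin\big((1-t)\theta\big)}{\sin\theta}\, p + \frac{\sin(t\theta)}{\sin\theta}\, q$$

In the configuration below, the `t` list varies the interpolation factor across layer depth, so the middle layers draw more from the second model while the first and last layers stay closer to the base.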
### Models Merged
The following models were included in the merge:
* TheDrummer/Cydonia-24B-v4.1
* Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Cydonia-24B-v4.1
- model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
merge_method: slerp
base_model: TheDrummer/Cydonia-24B-v4.1
parameters:
t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
```
|
mradermacher/dpo-llama3-8b-cool-model-GGUF
|
mradermacher
| 2025-09-18T15:04:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:xinzhangjohn/dpo-llama3-8b-cool-model",
"base_model:quantized:xinzhangjohn/dpo-llama3-8b-cool-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T14:50:52Z |
---
base_model: xinzhangjohn/dpo-llama3-8b-cool-model
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/xinzhangjohn/dpo-llama3-8b-cool-model
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#dpo-llama3-8b-cool-model-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/dpo-llama3-8b-cool-model-GGUF/resolve/main/dpo-llama3-8b-cool-model.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
david4096/afpo-all-MiniLM-L6-v2_gated_e1024-i
|
david4096
| 2025-09-18T14:58:17Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:58:14Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- small-ontology
---
# afpo_all-MiniLM-L6-v2_gated_e1024
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: afpo.owl
- **Domain**: general
- **Ontology Concepts**: 473
- **Concept Alignment**: 473/473 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 473
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 1.3 MB
- **Model Size**: 92.1 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 473 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/afpo-all-MiniLM-L6-v2_gated_e1024-i')  # full Hub id of this repo
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
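Conceptually, gated fusion can be sketched in PyTorch as follows (a hypothetical minimal version for illustration; the actual on2vec layer may differ):

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # a learned gate decides, per dimension, how much to rely on the
    # ontological embedding vs the textual embedding
    def __init__(self, dim: int = 64):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text_emb: torch.Tensor, onto_emb: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([text_emb, onto_emb], dim=-1)))
        return g * onto_emb + (1.0 - g) * text_emb

# toy usage in the card's 64-dimensional fused space
fusion = GatedFusion(dim=64)
print(fusion(torch.randn(2, 64), torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```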
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
Yntec/Atlas
|
Yntec
| 2025-09-18T14:42:20Z | 201 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"Fashion Design",
"Collage",
"Game",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"base_model:Yntec/FotoPhoto",
"base_model:finetune:Yntec/FotoPhoto",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-24T16:30:16Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- Fashion Design
- Collage
- Game
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
base_model:
- Yntec/FotoPhoto
---
# Atlas
FotoPhoto with the Atlas LoRA baked in.
Samples and prompts:

Top left: best quality, masterpiece, burger, simple background
Top right: best quality, masterpiece, skaters shoes design, simple background
Bottom left: best quality, masterpiece, motorcycle design, simple background
Bottom right: best quality, masterpiece, sakura tree in a bottle, simple background
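A minimal text-to-image sketch with diffusers (standard `StableDiffusionPipeline` usage; the prompt is taken from the samples above, everything else is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# load the checkpoint from the Hub
pipe = StableDiffusionPipeline.from_pretrained("Yntec/Atlas", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# one of the sample prompts above
image = pipe("best quality, masterpiece, burger, simple background").images[0]
image.save("burger.png")
```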
Original pages:
https://civitai.com/models/33036/atlas
https://huggingface.co/Yntec/FotoPhoto
|
david4096/agro-all-MiniLM-L6-v2_attention_e512-h
|
david4096
| 2025-09-18T14:41:20Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-attention",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:41:16Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-attention
- gnn-gcn
- medium-ontology
---
# agro_all-MiniLM-L6-v2_attention_e512
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: agro.owl
- **Domain**: general
- **Ontology Concepts**: 4,162
- **Concept Alignment**: 4,162/4,162 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 4162
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 7.2 MB
- **Model Size**: 130.2 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 4162 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/agro-all-MiniLM-L6-v2_attention_e512-h')  # full Hub id of this repo
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
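Conceptually, attention fusion can be sketched like this (an illustrative minimal version, not the exact on2vec implementation):

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    # score each embedding source and combine them with softmax weights
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, text_emb: torch.Tensor, onto_emb: torch.Tensor) -> torch.Tensor:
        stacked = torch.stack([text_emb, onto_emb], dim=1)   # (batch, 2, dim)
        weights = torch.softmax(self.score(stacked), dim=1)  # (batch, 2, 1)
        return (weights * stacked).sum(dim=1)                # (batch, dim)

fusion = AttentionFusion(dim=64)
print(fusion(torch.randn(2, 64), torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```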
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
gigant/bytes-tokenizer
|
gigant
| 2025-09-18T14:40:04Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-09-18T14:07:01Z |
---
license: mit
---
```python
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# fetch the tokenizer definition from this repo (assuming the file is named tokenizer.json)
tokenizer_path = hf_hub_download(repo_id="gigant/bytes-tokenizer", filename="tokenizer.json")
tokenizer = Tokenizer.from_file(tokenizer_path)
```
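A quick round-trip check (illustrative):

```python
enc = tokenizer.encode("hello")
print(enc.ids)                    # byte-level token ids
print(tokenizer.decode(enc.ids))  # should round-trip back to "hello"
```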
|
ksamiein/Qwen2.5-7B-triple-count-cot-level-2-30pct
|
ksamiein
| 2025-09-18T14:39:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit",
"grpo",
"lora",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-09-18T14:39:16Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
- grpo
- lora
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
david4096/cteno-all-MiniLM-L6-v2_attention_e128
|
david4096
| 2025-09-18T14:23:54Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-attention",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:23:51Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-attention
- gnn-gcn
- small-ontology
---
# cteno_all-MiniLM-L6-v2_attention_e128
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cteno.owl
- **Domain**: general
- **Ontology Concepts**: 172
- **Concept Alignment**: 172/172 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 172
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.3 MB
- **Model Size**: 92.7 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 172 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/cteno-all-MiniLM-L6-v2_attention_e128')  # full Hub id of this repo
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/clao-all-MiniLM-L6-v2_attention_e128
|
david4096
| 2025-09-18T14:20:07Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-attention",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:20:03Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-attention
- gnn-gcn
- medium-ontology
---
# clao_all-MiniLM-L6-v2_attention_e128
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: clao.owl
- **Domain**: general
- **Ontology Concepts**: 1,516
- **Concept Alignment**: 1,516/1,516 (100.0%)
- **Fusion Method**: attention
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 1516
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 1.7 MB
- **Model Size**: 105.3 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 1516 concepts → GNN → 64 output
- Fusion: attention → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/clao-all-MiniLM-L6-v2_attention_e128')  # full Hub id of this repo
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: attention
Attention-based fusion that learns to focus on relevant embedding components
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
david4096/cl-all-MiniLM-L6-v2_gated_e128
|
david4096
| 2025-09-18T14:11:59Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-gated",
"gnn-gcn",
"large-ontology",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T14:11:45Z |
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-gated
- gnn-gcn
- large-ontology
---
# cl_all-MiniLM-L6-v2_gated_e128
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cl.owl
- **Domain**: general
- **Ontology Concepts**: 16,667
- **Concept Alignment**: 16,667/16,667 (100.0%)
- **Fusion Method**: gated
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 16667
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 53.4 MB
- **Model Size**: 244.6 MB
- **Library**: on2vec + sentence-transformers
## Technical Architecture
This model uses a multi-stage architecture:
1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Gated fusion learns when to rely on ontological vs textual knowledge
**Embedding Flow:**
- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 16667 concepts → GNN → 64 output
- Fusion: gated → Final embedding
## How It Works
This model combines:
1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method
The ontological knowledge helps the model better understand domain-specific relationships and concepts.
## Usage
```python
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer('david4096/cl-all-MiniLM-L6-v2_gated_e128')  # full Hub id of this repo
# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)
# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```
## Fusion Method: gated
Gated fusion mechanism that learns when to use ontological vs textual information
## Training Process
This model was created using the on2vec pipeline:
1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types
## Intended Use
This model is particularly effective for:
- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements
## Limitations
- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models
## Citation
If you use this model, please cite the on2vec framework:
```bibtex
@software{on2vec,
title={on2vec: Ontology Embeddings with Graph Neural Networks},
author={David Steinberg},
url={https://github.com/david4096/on2vec},
year={2024}
}
```
---
Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
|
winnieyangwannan/popqa_gpt-oss-20b_experts_pnas_layer_14_12_all_37_0.0001_1280_3
|
winnieyangwannan
| 2025-09-18T13:52:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T13:48:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TsienDragon/flux-kontext-character-composition
|
TsienDragon
| 2025-09-18T13:46:00Z | 0 | 0 | null |
[
"face-segmentation",
"Lora-weight",
"image-to-image",
"en",
"dataset:TsienDragon/face_segmentation_20",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:finetune:black-forest-labs/FLUX.1-Kontext-dev",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2025-09-18T08:40:02Z |
---
license: apache-2.0
datasets:
- TsienDragon/face_segmentation_20
language:
- en
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
pipeline_tag: image-to-image
tags:
- face-segmentation
- Lora-weight
---
# Flux-Kontext-Lora-Faceseg
## Overview
This is a LoRA fine-tuned face segmentation model based on the Flux-Kontext architecture, specifically designed to transform facial images into precise segmentation masks. The base model is enhanced through Parameter-Efficient Fine-Tuning (PEFT) using the LoRA (Low-Rank Adaptation) technique.
## Model Architecture
- Base Model: Flux-Kontext
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Task: Image-to-Image translation (Face → Segmentation Mask)
- Input: RGB facial images
- Output: Binary/grayscale segmentation masks highlighting facial regions
## Training Configuration
- Dataset: 20 carefully curated face segmentation samples
- Training Steps: 900-1000 steps
- Prompt: "change the image from the face to the face segmentation mask"
- Precision Options:
- BF16 precision for high-quality results
- FP4 quantization for memory-efficient deployment
## Key Features
1. High Precision Segmentation: Accurately identifies and segments facial boundaries with fine detail preservation
2. Memory Efficient: FP4 quantized version maintains competitive quality while significantly reducing memory footprint
3. Fast Inference: Optimized for real-time applications with 20 inference steps
4. Robust Performance: Handles various lighting conditions and facial orientations
5. Parameter Efficient: Only trains LoRA adapters (~18M parameters) while keeping base model frozen
## Technical Specifications
- Inference Steps: 20
- CFG Scale: 1.0
- Input Resolution: Configurable (typically 832x576)
- Model Size: Base model + ~18M LoRA parameters
- Memory Usage:
- BF16 version: Higher memory, best quality
- FP4 version: 75% memory reduction, competitive quality
## Use Cases
- Identity Verification: KYC (Know Your Customer) applications
- Privacy Protection: Face anonymization while preserving facial structure
- Medical Applications: Facial analysis and dermatological assessments
- AR/VR Applications: Real-time face tracking and segmentation
- Content Creation: Automated face masking for video editing
## Performance Highlights
- Accuracy: Significantly improved boundary detection compared to base model
- Detail Preservation: Maintains fine facial features in segmentation masks
- Consistency: Stable segmentation quality across different input conditions
- Efficiency: FP4 quantization achieves 4x memory savings with minimal quality loss
## Deployment Options
- High-Quality Mode: BF16 precision for maximum accuracy
- Efficient Mode: FP4 quantization for resource-constrained environments
- Real-time Applications: Optimized inference pipeline for low-latency requirements
This model represents a practical solution for face segmentation tasks, offering an excellent balance between accuracy, efficiency, and deployability across various hardware configurations.
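For inference, a hypothetical minimal sketch (assuming diffusers ships `FluxKontextPipeline` and that this repo's LoRA weights load via the standard `load_lora_weights`; the input file name is illustrative):

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

# load the frozen base model, then attach the ~18M-parameter LoRA adapter
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("TsienDragon/flux-kontext-character-composition")

face = load_image("face.png")  # illustrative input path
mask = pipe(
    image=face,
    prompt="change the image from the face to the face segmentation mask",
    num_inference_steps=20,  # matches the card's technical specs
    guidance_scale=1.0,      # CFG scale from the card
).images[0]
mask.save("face_mask.png")
```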
## Example:

## Code
LoRA fine-tuning code is available here: [tsiendragon/qwen-image-finetune](https://github.com/tsiendragon/qwen-image-finetune)
## Download model
[Download](/TsienDragon/flux-kontext-character-composition/tree/main) the weights in the Files & versions tab.
|
TsienDragon/flux-kontext-face-segmentation
|
TsienDragon
| 2025-09-18T13:43:17Z | 0 | 0 | null |
[
"image2image",
"faceseg",
"en",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:finetune:black-forest-labs/FLUX.1-Kontext-dev",
"license:apache-2.0",
"region:us"
] | null | 2025-09-18T08:29:54Z |
---
license: apache-2.0
language:
- en
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
tags:
- image2image
- faceseg
---
# Flux-Kontext-Lora-Faceseg
<Gallery />
## Model description
This is a LoRA fine-tuned face segmentation model based on Flux-Kontext architecture,
specifically designed to transform facial images into precise segmentation masks.
The model leverages the powerful multimodal capabilities of Flux-Kontext and enhances it through Parameter-Efficient Fine-Tuning (PEFT) using LoRA (Low-Rank Adaptation) technique.
## Model Architecture
- Base Model: Flux-Kontext-Dev
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Task: Image-to-Image translation (Face → Segmentation Mask)
- Input: RGB facial images
- Output: Binary/grayscale segmentation masks highlighting facial regions
## Training Configuration
- Dataset: 20 carefully curated face segmentation samples
- Training Steps: 900-1000 steps
- Prompt: "change the image from the face to the face segmentation mask"
- Precision Options:
- BF16 precision for high-quality results
- FP4 quantization for memory-efficient deployment
## Key Features
1. High Precision Segmentation: Accurately identifies and segments facial boundaries with fine detail preservation
2. Memory Efficient: FP4 quantized version maintains competitive quality while significantly reducing memory footprint
3. Fast Inference: Optimized for real-time applications with 20 inference steps
4. Robust Performance: Handles various lighting conditions and facial orientations
5. Parameter Efficient: Only trains LoRA adapters (~18M parameters) while keeping base model frozen
## Technical Specifications
- Inference Steps: 20
- CFG Scale: 2.5
- Input Resolution: Configurable (typically 512x512)
- Model Size: Base model + ~18M LoRA parameters
- Memory Usage:
- BF16 version: Higher memory, best quality
- FP4 version: 75% memory reduction, competitive quality
## Use Cases
- Identity Verification: KYC (Know Your Customer) applications
- Privacy Protection: Face anonymization while preserving facial structure
- Medical Applications: Facial analysis and dermatological assessments
- AR/VR Applications: Real-time face tracking and segmentation
- Content Creation: Automated face masking for video editing
## Performance Highlights
- Accuracy: Significantly improved boundary detection compared to base model
- Detail Preservation: Maintains fine facial features in segmentation masks
- Consistency: Stable segmentation quality across different input conditions
- Efficiency: FP4 quantization achieves 4x memory savings with minimal quality loss
## Deployment Options
- High-Quality Mode: BF16 precision for maximum accuracy
- Efficient Mode: FP4 quantization for resource-constrained environments
- Real-time Applications: Optimized inference pipeline for low-latency requirements
This model represents a practical solution for face segmentation tasks, offering an excellent balance between accuracy, efficiency, and deployability across various hardware configurations.
## Example:
Control Images

Edited image with Qwen-Image-Edit using the prompt
`change the face to face segmentation mask`

After LoRA fine-tuning, with the same prompt:

## Code
LoRA fine-tuning code is available here: [tsiendragon/qwen-image-finetune](https://github.com/tsiendragon/qwen-image-finetune)
## Download model
[Download](/TsienDragon/flux-kontext-face-segmentation/tree/main) the weights in the Files & versions tab.
|
tue-mps/ade20k_semantic_eomt_large_512
|
tue-mps
| 2025-09-18T13:37:34Z | 2,295 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"eomt",
"vision",
"image-segmentation",
"arxiv:2503.19108",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2025-03-26T19:29:47Z |
---
library_name: transformers
license: mit
tags:
- vision
- image-segmentation
- pytorch
---
# EoMT
[](https://pytorch.org/)
**EoMT (Encoder-only Mask Transformer)** is a Vision Transformer (ViT) architecture designed for high-quality and efficient image segmentation. It was introduced in the CVPR 2025 highlight paper:
**[Your ViT is Secretly an Image Segmentation Model](https://www.tue-mps.org/eomt)**
by Tommie Kerssies, Niccolò Cavagnero, Alexander Hermans, Narges Norouzi, Giuseppe Averta, Bastian Leibe, Gijs Dubbelman, and Daan de Geus.
> **Key Insight**: Given sufficient scale and pretraining, a plain ViT with only a few additional parameters can perform segmentation without the need for task-specific decoders or pixel fusion modules. The same model backbone supports semantic, instance, and panoptic segmentation with different post-processing 🤗
The original implementation can be found in this [repository](https://github.com/tue-mps/eomt).
The HuggingFace model page is available at this [link](https://huggingface.co/papers/2503.19108).
---
### How to use
Here is how to use this model for Semantic Segmentation:
```python
import matplotlib.pyplot as plt
import requests
import torch
from PIL import Image
from transformers import EomtForUniversalSegmentation, AutoImageProcessor

model_id = "tue-mps/ade20k_semantic_eomt_large_512"
processor = AutoImageProcessor.from_pretrained(model_id)
model = EomtForUniversalSegmentation.from_pretrained(model_id)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

inputs = processor(
    images=image,
    return_tensors="pt",
)

with torch.inference_mode():
    outputs = model(**inputs)

# Prepare the original image size in the format (height, width)
target_sizes = [(image.height, image.width)]

# Post-process the model outputs to get the final segmentation prediction
preds = processor.post_process_semantic_segmentation(
    outputs,
    target_sizes=target_sizes,
)

# Visualize the segmentation mask
plt.imshow(preds[0])
plt.axis("off")
plt.title("Semantic Segmentation")
plt.show()
```
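Because the same architecture covers instance and panoptic segmentation, switching tasks is mostly a matter of loading the matching checkpoint and calling the matching post-processor. A sketch, assuming the EoMT processor exposes the same universal-segmentation helpers as other such models in transformers (e.g. `post_process_panoptic_segmentation`); the panoptic checkpoint id below is illustrative:
```python
# Sketch: panoptic segmentation with a panoptic-trained EoMT checkpoint
# (illustrative id), assuming the standard universal-segmentation API.
import requests
import torch
from PIL import Image
from transformers import EomtForUniversalSegmentation, AutoImageProcessor

model_id = "tue-mps/coco_panoptic_eomt_large_640"  # illustrative checkpoint id
processor = AutoImageProcessor.from_pretrained(model_id)
model = EomtForUniversalSegmentation.from_pretrained(model_id)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.inference_mode():
    outputs = model(**inputs)

preds = processor.post_process_panoptic_segmentation(
    outputs, target_sizes=[(image.height, image.width)]
)
# Each prediction holds a segmentation map plus per-segment metadata.
print(preds[0]["segmentation"].shape, len(preds[0]["segments_info"]))
```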
## Citation
If you find our work useful, please consider citing us as:
```bibtex
@inproceedings{kerssies2025eomt,
author = {Kerssies, Tommie and Cavagnero, Niccolò and Hermans, Alexander and Norouzi, Narges and Averta, Giuseppe and Leibe, Bastian and Dubbelman, Gijs and de Geus, Daan},
title = {Your ViT is Secretly an Image Segmentation Model},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2025},
}
```
|
mradermacher/Summary-llm-qwen-2.5-32b-GGUF
|
mradermacher
| 2025-09-18T13:29:40Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:shreyess/Summary-llm-qwen-2.5-32b",
"base_model:quantized:shreyess/Summary-llm-qwen-2.5-32b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T12:53:34Z |
---
base_model: shreyess/Summary-llm-qwen-2.5-32b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/shreyess/Summary-llm-qwen-2.5-32b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Summary-llm-qwen-2.5-32b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
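For example, with the llama-cpp-python bindings (a sketch: the file name matches one of the quants listed below, but context size and sampling settings are illustrative):
```python
# Sketch: run the Q4_K_M quant locally with llama-cpp-python.
# Assumes the GGUF file has already been downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Summary-llm-qwen-2.5-32b.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to available RAM
)
out = llm(
    "Summarize: The quick brown fox jumps over the lazy dog.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```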
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Summary-llm-qwen-2.5-32b-GGUF/resolve/main/Summary-llm-qwen-2.5-32b.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
choiqs/Qwen3-4B-ultrachat-bsz128-regular-seed43-lr2e-6-4gpus
|
choiqs
| 2025-09-18T13:06:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T13:05:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Hala-350M-GGUF
|
mradermacher
| 2025-09-18T12:27:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"ar",
"dataset:hammh0a/Hala-4.6M-SFT",
"base_model:hammh0a/Hala-350M",
"base_model:quantized:hammh0a/Hala-350M",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T11:11:33Z |
---
base_model: hammh0a/Hala-350M
datasets:
- hammh0a/Hala-4.6M-SFT
language:
- ar
library_name: transformers
license: cc-by-nc-4.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/hammh0a/Hala-350M
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Hala-350M-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hala-350M-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
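For example, with the llama-cpp-python bindings (a sketch: since this is a conversational model, the chat-completion helper is used, which picks up the chat template from the GGUF metadata; file name and settings are illustrative):
```python
# Sketch: chat with the Q8_0 quant via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Hala-350M.Q8_0.gguf", n_ctx=2048)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "ما هي عاصمة المغرب؟"}],  # "What is the capital of Morocco?"
    max_tokens=64,
)
print(resp["choices"][0]["message"]["content"])
```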
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hala-350M-GGUF/resolve/main/Hala-350M.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|