| Column | Type | Range / Values |
|:--------------|:-----------------------|:------------------------------------------|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-23 18:28:48 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 573 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-23 18:28:01 |
| card | string | length 11 – 1.01M |
mradermacher/Lorablated-w2bb-psy-della-i1-GGUF | mradermacher | 2025-09-22T13:54:18Z | 0 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:Retreatcost/Lorablated-w2bb-psy-della", "base_model:quantized:Retreatcost/Lorablated-w2bb-psy-della", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix"] | null | 2025-09-22T13:00:48Z |
---
base_model: Retreatcost/Lorablated-w2bb-psy-della
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Retreatcost/Lorablated-w2bb-psy-della
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lorablated-w2bb-psy-della-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
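As a concrete illustration, here is a minimal Python sketch for fetching one of the quants with `huggingface_hub` (the filename comes from the "Provided Quants" table below; the runtime you load it with, e.g. llama.cpp, is up to you):
```python
# Minimal sketch: download a single GGUF quant from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Lorablated-w2bb-psy-della-i1-GGUF",
    filename="Lorablated-w2bb-psy-della.i1-Q4_K_M.gguf",
)
print(path)  # pass this file to a GGUF-capable runtime such as llama.cpp

# Multi-part quants (e.g. "*.gguf.part1of2") must be concatenated into one
# file before use, for example:
# with open("model.gguf", "wb") as out:
#     for part in ("model.gguf.part1of2", "model.gguf.part2of2"):
#         with open(part, "rb") as f:
#             out.write(f.read())
```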
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF/resolve/main/Lorablated-w2bb-psy-della.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| triet-bit/meta-model | triet-bit | 2025-09-22T13:52:50Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-09-22T13:52:50Z |
---
license: apache-2.0
---
| nikilr/zephyr_lat_new | nikilr | 2025-09-22T13:52:08Z | 0 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-22T13:51:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| saracandu/stldec_random_32_umap | saracandu | 2025-09-22T13:50:33Z | 14 | 0 | transformers | ["transformers", "safetensors", "stldec32umap", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us"] | text-generation | 2025-09-12T10:41:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_AGAIN_ROUND3-checkpoint-epoch-80 | MattBou00 | 2025-09-22T13:46:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | reinforcement-learning | 2025-09-22T13:45:16Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-80")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-80")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-80")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])  # returns (lm_logits, loss, value); `value` comes from the value head
```
| 0701phantom/all-t5-base-v1-contriever-msmarco2fiqa | 0701phantom | 2025-09-22T13:46:01Z | 0 | 0 | transformers | ["transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"] | null | 2025-09-22T13:45:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_AGAIN_ROUND3-checkpoint-epoch-60 | MattBou00 | 2025-09-22T13:42:59Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | reinforcement-learning | 2025-09-22T13:41:59Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-60")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-60")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-60")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
| aamijar/Llama-2-7b-hf-dora-r8-boolq-epochs3 | aamijar | 2025-09-22T13:42:03Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-22T13:42:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| billerjully/my-test-ft | billerjully | 2025-09-22T13:40:11Z | 0 | 0 | null | ["safetensors", "mistral", "llama-factory", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "region:us"] | null | 2025-09-21T14:30:36Z |
---
license: apache-2.0
tags:
- llama-factory
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
---
| clips/e5-small-trm | clips | 2025-09-22T13:38:39Z | 11 | 0 | null | ["safetensors", "bert", "sentence-similarity", "nl", "arxiv:2509.12340", "base_model:intfloat/multilingual-e5-small", "base_model:finetune:intfloat/multilingual-e5-small", "license:mit", "region:us"] | sentence-similarity | 2025-08-28T09:23:54Z |
---
license: mit
language:
- nl
base_model:
- intfloat/multilingual-e5-small
pipeline_tag: sentence-similarity
---
# E5-small-trm
This model is a trimmed version of [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | intfloat/multilingual-e5-small | clips/e5-small-trm |
|:---------------------------|:-------------------------------|:-------------------|
| parameter_size_full | 117,653,760 | 40,840,320 |
| parameter_size_embedding | 96,014,208 | 19,200,768 |
| vocab_size | 250,037 | 50,002 |
| compression_rate_full | 100.0 | 34.71 |
| compression_rate_embedding | 100.0 | 20.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:-----------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| nl | allenai/c4 | text | nl | validation | 50000 | 2 |
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
tokenizer = AutoTokenizer.from_pretrained('clips/e5-small-trm')
model = AutoModel.from_pretrained('clips/e5-small-trm')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
Below is an example of usage with `sentence_transformers`.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('clips/e5-small-trm')
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
## Benchmark Evaluation
Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340) and the best model per size category are highlighted in bold):
| Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT |
|---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | |
| **Supervised (small, <100M)** | | | | | | | | | | |
| **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 |
| **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 |
| **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 |
| **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
| **Supervised (base, <305M)** | | | | | | | | | | |
| granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 |
| **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 |
| **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 |
| multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 |
| paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 |
| **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 |
| **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 |
| **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** |
| potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 |
| multilingual-e5-base | 278M | 58.2 | 44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 |
| granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 |
| paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 |
| Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 |
| gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 |
| **Supervised (large, >305M)** | | | | | | | | | | |
| **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 |
| **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 |
| **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 |
| **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 |
| **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** |
| multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 |
| Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 |
| bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 |
| jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 |
### Citation Information
If you find our paper, benchmark, or models helpful, please consider citing us as follows:
```latex
@misc{banar2025mtebnle5nlembeddingbenchmark,
title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
year={2025},
eprint={2509.12340},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.12340},
}
```
[//]: # (https://arxiv.org/abs/2509.12340)
| MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_AGAIN_ROUND3-checkpoint-epoch-20 | MattBou00 | 2025-09-22T13:36:27Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | reinforcement-learning | 2025-09-22T13:35:28Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-20")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-20")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
| shivash/testingmodel | shivash | 2025-09-22T13:33:19Z | 0 | 0 | null | ["pytorch", "enhanced_hybrid_transformer", "region:us"] | null | 2025-09-22T13:12:03Z |
# Enhanced Hybrid Transformer 416M
🚀 **416,417,792 parameter** transformer with modern optimizations.
## Features
- **24 layers** × **16 heads**
- **GQA-4** (Grouped Query Attention)
- **SwiGLU** activation
- **RMSNorm** normalization
- **RoPE** positional embeddings
## Contents
- `pytorch_model.bin` - Model weights
- `config.json` - Model configuration
- `tokenizer.json` - Tokenizer files
- `README.md` - This file
## Usage
Load with the original repository code for full functionality.
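As a starting point, the files listed above can be fetched and inspected with standard tooling. A minimal sketch (the config key names are assumptions, since the architecture is custom):
```python
# Minimal sketch: fetch and inspect the checkpoint files listed above.
# The architecture is custom ("enhanced_hybrid_transformer"), so standard
# AutoModel classes may not apply; config key names are assumptions.
import json

import torch
from huggingface_hub import hf_hub_download

config_path = hf_hub_download("shivash/testingmodel", "config.json")
weights_path = hf_hub_download("shivash/testingmodel", "pytorch_model.bin")

config = json.load(open(config_path))
state_dict = torch.load(weights_path, map_location="cpu")
print(config)                                       # layer/head counts, etc.
print(sum(p.numel() for p in state_dict.values()))  # should be ~416M parameters
```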
---
🚀 Generated with [Claude Code](https://claude.ai/code)
| Mari-ano/Caravaggio_Remastered | Mari-ano | 2025-09-22T13:29:07Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2025-09-22T13:19:52Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/ComfyUI_02111_.png
text: 'carravabaroque, A half-length portrait of a young woman emerges from deep shadow, her face illuminated by a violent diagonal beam of light that ignites her pale skin and ruby lips while the rest dissolves into darkness. Her gaze is unwavering, caught between revelation and secrecy. The coarse weave of her garment, patterned with feral markings like a predator’s pelt, shimmers in warm ochres and deep browns, each fold swallowed by shadow. Behind her, the void is black and impenetrable, the crimson aura of her lips and attire burning like a sudden flame against the dark. The atmosphere is tense and theatrical, as if the moment were suspended between beauty and menace, a vision of modernity transfigured into sacred chiaroscuro.'
- output:
url: images/ComfyUI_02163_.png
text: 'carravabaroque, dramatic chiaroscuro at a cave entrance, a young woman draped in crimson mantle and ivory tunic, seated with head resting on one hand, the other hand near an open book on the stone, single raking light illuminating her face, hands and fabric folds, deep black grotto behind her, distant blue–orange sunset sky and a small mountain beyond, textured brushwork, tenebrism yet preserving the original warm colors, serene and contemplative'
- output:
url: images/ComfyUI_02157_.png
text: 'carravabaroque, dramatic chiaroscuro oil painting, two noblewomen in the same pose and composition as the original, both dressed in luxurious white satin gowns with pearl jewelry, one standing and the other seated gracefully, glowing skin illuminated by strong directional light, deep shadows surrounding them, baroque atmosphere, fabric folds shimmering under chiaroscuro, intimate and refined presence'
- output:
url: images/ComfyUI_02156_.png
text: 'carravabaroque, dramatic chiaroscuro oil painting, two noblemen in the same pose and composition as the original, one dressed in a black formal coat with golden vest, the other dressed in elegant white formal attire, standing and seated side by side, baroque textures and deep shadowed background, painterly fabrics with strong light reflecting off folds, solemn expressions and dignified posture, 17th century baroque atmosphere'
- output:
url: images/ComfyUI_02154_ - Copy.png
text: 'carravabaroque, dramatic chiaroscuro oil painting, a young maid in a simple bonnet and pale blue dress with white apron, leaning on a wooden table, strong light falling across her face and hands, dark background with glowing highlights, holding a modern smartphone in her hand and gazing at the screen, painterly textures, fabric folds rendered with rich detail, baroque atmosphere with a modern twist'
- output:
url: images/ComfyUI_02153_.png
text: 'carravabaroque, dramatic chiaroscuro oil painting, a baroque gentleman with curly hair and ornate black coat giving a thumbs up, strong contrast of light and shadow, painterly brushstrokes with visible texture, realistic fabric sheen, humorous and expressive face, wearing modern white AirPods, subtle glowing highlight on the earbuds, baroque atmosphere with modern twist'
- output:
url: images/ComfyUI_02126_.png
text: 'carravabaroque, portrait of a young woman turning her head toward the viewer, luminous pearl earring catching the light, smooth delicate skin with a soft blush, large expressive eyes filled with quiet curiosity, wearing a golden-brown robe with a white collar, and a vibrant blue and yellow turban draped elegantly, dark background emphasizing the serene glow, rendered in soft diffuse light with subtle brushstrokes, atmosphere of intimacy and mystery'
- output:
url: images/ComfyUI_02125_.png
text: 'carravabaroque, dramatic portrait of a man in mid-shout, head turned sharply over the shoulder with wide, startled eyes and mouth agape, baroque theatrical expression, strong chiaroscuro lighting with golden highlights and deep shadows, textured fabric with coarse folds, rough brushstrokes accentuating motion and intensity, raw emotion captured in a frozen moment'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: carravabaroque
license: creativeml-openrail-m
---
# Caravaggio
<Gallery />
## Model description
Caravaggio (Michelangelo Merisi da Caravaggio, 1571–1610) is remembered as one of the most influential painters of the Baroque era. His works broke away from idealized Renaissance traditions, favoring radical realism and dramatic chiaroscuro. A single shaft of light often cuts across the darkness, igniting flesh and fabric with sudden brilliance while leaving the rest in impenetrable shadow. His brushstrokes are dense and tactile, pressing pigment into rough textures of cloth, stone, and skin, creating an atmosphere of raw immediacy and intensity. The emotional climate of his paintings is equally striking: charged with tension, violence, devotion, or revelation, always suspended between shadow and illumination.
This LoRA seeks to capture those essential qualities — the dramatic light, the textured brushwork, and the solemn atmosphere — and bring them into the generative process. Trained for use with Pixelwave, it performs especially well in single-figure portraits, highlighting the sharp contrasts and painterly surfaces that define Caravaggio’s style. It can also be applied to multi-figure scenes to suggest group compositions with a heightened sense of drama. However, in complex group shots the faces may not always resolve with the same precision as in solo portraits, so the LoRA is best leveraged when the focus is on one or two central figures.
## Trigger words
You should use `carravabaroque` to trigger the image generation.
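For instance, a minimal `diffusers` sketch (assuming access to the gated FLUX.1-dev base model; the prompt and sampler settings are illustrative):
```python
# Minimal sketch: apply this LoRA on top of FLUX.1-dev with diffusers.
# Assumes access to the gated base model and a GPU with enough memory.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Mari-ano/Caravaggio_Remastered")

image = pipe(
    "carravabaroque, dramatic chiaroscuro portrait of a young woman, "
    "single raking light, deep black background",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("caravaggio_lora.png")
```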
## Download model
[Download](/Mari-ano/Caravaggio_Remastered/tree/main) them in the Files & versions tab.
| ghostai1/ccengine1 | ghostai1 | 2025-09-22T13:15:31Z | 0 | 0 | null | ["region:us"] | null | 2025-03-12T01:36:58Z |
---
license: mit
title: Customer Experience Bot Demo
sdk: gradio
colorFrom: purple
colorTo: green
short_description: CX AI LLM
---# Mario AI Demo
A sophisticated AI-powered demo of a Mario game environment, showcasing advanced gameplay mechanics and intelligent agent behaviors. Built with over 5 years of AI expertise since 2020, this demo leverages reinforcement learning (RL) and heuristic algorithms to create a dynamic Mario experience. Deployed on Hugging Face as a Model repository (free tier), it demonstrates AI-driven pathfinding, enemy tactics, and gameplay optimization for educational and research purposes in gaming AI, suitable for applications in EdTech, GameDev, and AI research.
## Technical Architecture
### AI Pathfinding and Gameplay Pipeline
The core of this demo is a hybrid AI system combining reinforcement learning and rule-based heuristics to control Mario’s actions:
- **Reinforcement Learning (RL) Agent**:
- Utilizes a Proximal Policy Optimization (PPO) algorithm, fine-tuned on a custom Mario environment.
- Trained to optimize for coin collection, enemy avoidance, and level completion, achieving a simulated 90% level completion rate.
- Model size: Lightweight (~50MB), compatible with free-tier CPU deployment.
- **Heuristic Pathfinding**:
- Implements the A* pathfinding algorithm for efficient navigation through game levels (see the sketch after this list).
- Incorporates dynamic obstacle avoidance (e.g., Goombas, Koopas) using real-time collision detection.
- **Enemy Tactics**:
- Enemies (e.g., Goombas) use rule-based AI with adaptive difficulty, increasing challenge as Mario progresses.
- Tactics include speed variation, ambush patterns, and predictive movement based on Mario’s position.
- **Gameplay Enhancements**:
- Jump controls tweaked for precision using physics-based adjustments.
- Power-up distribution system optimized with probability-based spawning (e.g., 20% chance for Super Mushroom).
- Adaptive weather effects (e.g., rain, wind) impacting Mario’s movement and enemy behavior.
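To make the pathfinding concrete, here is a minimal A* sketch on a 2D grid with a Manhattan heuristic (illustrative only; the demo's actual level representation and movement costs may differ):
```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2D grid: 0 = free cell, 1 = obstacle (e.g., a Goomba)."""
    def h(p):  # Manhattan distance, admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:  # already expanded with an equal or better cost
            continue
        came_from[cur] = parent
        if cur == goal:  # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # goal unreachable
```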
### Data Preprocessing for Game State
The demo processes game state data to train and run the AI (a preprocessing sketch follows the list below):
- **State Representation**:
- Game screen pixels converted to a 2D grid (84x84) for RL input.
- Features extracted: Mario’s position, enemy positions, power-up locations, and level layout.
- **Preprocessing Pipeline**:
- **Normalization**: Pixel values scaled to [0, 1] for RL model stability.
- **Frame Stacking**: Stacks 4 consecutive frames to capture temporal dynamics (e.g., Mario’s velocity).
- **Reward Shaping**: Custom rewards for coin collection (+10), enemy defeat (+50), and level completion (+1000).
- **Output**: Cleaned state data stored as `mario_states.csv` for training and inference.
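A minimal sketch of the normalization and frame-stacking steps described above (the 84x84 shape and 4-frame window follow the text; everything else is illustrative):
```python
from collections import deque

import numpy as np

class FrameStacker:
    """Normalizes 84x84 frames to [0, 1] and stacks the last k of them."""

    def __init__(self, k=4):
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        norm = frame.astype(np.float32) / 255.0  # pixel values -> [0, 1]
        for _ in range(self.frames.maxlen):      # pad the window at episode start
            self.frames.append(norm)
        return np.stack(self.frames)             # shape: (k, 84, 84)

    def step(self, frame):
        self.frames.append(frame.astype(np.float32) / 255.0)
        return np.stack(self.frames)
```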
### Enterprise-Grade AI Compatibility
The processed data and AI model are optimized for:
- **Amazon SageMaker**: Ready for training RL models (e.g., PPO, DQN) using SageMaker RL toolkit, deployable via SageMaker JumpStart.
- **Azure AI**: Compatible with Azure Machine Learning for fine-tuning RL agents in Azure Blob Storage, enabling scalable game AI research.
- **FastAPI Integration**: Designed for API-driven inference (e.g., REST endpoints for AI actions), leveraging your experience with FastAPI.
## Performance Monitoring and Visualization
The demo includes a performance monitoring suite:
- **Latency Tracking**: Measures pathfinding, enemy decision-making, and gameplay update times using `time.perf_counter()`, reported in milliseconds (see the helper sketch below).
- **Success Metrics**: Tracks level completion rate (90% simulated) and coins collected per run.
- **Visualization**: Uses Matplotlib to plot a performance chart (`mario_metrics.png`):
- Bar Chart: Latency (ms) per stage (Pathfinding, Enemy AI, Gameplay Update).
- Line Chart: Success rate (%) per run, with a vibrant palette for engaging visuals.
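A sketch of the `time.perf_counter()` wrapper implied above (names are illustrative):
```python
import time

def timed(stage, fn, *args, **kwargs):
    """Run fn and report its latency in milliseconds for the given stage."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    latency_ms = (time.perf_counter() - start) * 1000
    print(f"{stage}: {latency_ms:.2f} ms")
    return result

# Example: path = timed("Pathfinding", a_star, grid, start, goal)
```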
## Gradio Interface for Interactive Demo
The demo is accessible via Gradio, providing an interactive Mario AI experience:
- **Input**: Select a level (e.g., "Level 1-1") and AI mode (e.g., "Exploration", "Speedrun").
- **Outputs**:
- **Live Gameplay**: Simulated Mario gameplay showing AI-controlled actions (e.g., jumps, enemy avoidance).
- **Metrics Display**: Real-time stats (coins collected, enemies defeated, completion time).
- **Performance Plot**: Visual metrics for latency and success rate.
- **Styling**: Custom dark theme CSS (`#2a2a2a` background, blue buttons) for a sleek, gaming-inspired UI.
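A minimal Gradio skeleton matching the interface described above (the real `app.py` embeds the game environment and plotting; this stub only wires up the inputs and outputs):
```python
import gradio as gr

def run_demo(level, ai_mode):
    # Placeholder for the simulated gameplay + metrics produced by app.py.
    return f"Level: {level} | Mode: {ai_mode} | Coins: 15 | Enemies Defeated: 3"

demo = gr.Interface(
    fn=run_demo,
    inputs=[
        gr.Dropdown(["Level 1-1", "Level 1-2"], label="Level"),
        gr.Dropdown(["Exploration", "Speedrun"], label="AI Mode"),
    ],
    outputs=gr.Textbox(label="Metrics"),
    title="Mario AI Demo",
)

if __name__ == "__main__":
    demo.launch()
```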
## Setup
- Clone this repository to a Hugging Face Model repository (free tier, public).
- Add `requirements.txt` with dependencies (`gradio==4.44.0`, `matplotlib==3.9.2`, etc.).
- Upload `app.py` (includes embedded game environment for seamless deployment).
- Configure to run with Python 3.9+, CPU hardware (no GPU).
## Usage
- **Select Level**: Choose a Mario level in the Gradio UI (e.g., "Level 1-1").
- **Select AI Mode**: Pick an AI behavior mode (e.g., "Exploration" for coin collection, "Speedrun" for fastest completion).
- **Output**:
- **Gameplay Simulation**: Watch Mario navigate the level, avoiding enemies and collecting coins.
- **Metrics**: “Coins: 15, Enemies Defeated: 3, Completion Time: 45s”.
- **Performance Plot**: Visual metrics for latency and success rate.
**Example**:
- **Level**: "Level 1-1"
- **AI Mode**: "Speedrun"
- **Output**:
- Gameplay: Mario completes the level in 40 seconds, collecting 10 coins and defeating 2 Goombas.
- Metrics: “Coins: 10, Enemies Defeated: 2, Completion Time: 40s”.
- Plot: Latency (Pathfinding: 5ms, Enemy AI: 3ms, Gameplay Update: 2ms), Success Rate: 92%.
## Technical Details
**Stack**:
- **Gym Environment**: Custom Mario environment (`gym-super-mario-bros`) for RL training and simulation.
- **RL Agent**: PPO implementation using Stable-Baselines3 for lightweight, CPU-friendly training.
- **Pathfinding**: A* algorithm with dynamic obstacle avoidance.
- **Gradio**: Interactive UI for real-time gameplay demos.
- **Matplotlib**: Performance visualization with bar and line charts.
- **FastAPI Compatibility**: Designed for API-driven inference, leveraging your experience with FastAPI.
**Free Tier Optimization**: Lightweight with CPU-only dependencies, no GPU required.
**Extensibility**: Ready for integration with game engines (e.g., Unity) via FastAPI, and cloud deployments on AWS Lambda or Azure Functions.
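To make the stack concrete, here is a minimal training sketch with `gym-super-mario-bros` and Stable-Baselines3 (illustrative; package versions may need pinning, since the gym API has changed across releases):
```python
# Minimal PPO training sketch for the custom Mario environment.
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT
from nes_py.wrappers import JoypadSpace
from stable_baselines3 import PPO

env = gym_super_mario_bros.make("SuperMarioBros-1-1-v0")
env = JoypadSpace(env, SIMPLE_MOVEMENT)  # restrict to a small action set

model = PPO("CnnPolicy", env, verbose=1)  # pixel observations -> CNN policy
model.learn(total_timesteps=10_000)       # raise for real training runs
model.save("mario_ppo")
```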
## Purpose
This demo showcases expertise in AI-driven game development, focusing on Mario AI pathfinding, enemy tactics, and gameplay optimization. Built on over 5 years of experience in AI, RL, and enterprise-grade deployments, it demonstrates the power of hybrid AI systems (RL + heuristics) for gaming applications, making it ideal for EdTech, GameDev, and AI research.
## Future Enhancements
- **LLM Integration**: Incorporate lightweight LLMs (e.g., distilgpt2) for dynamic NPC dialogue generation.
- **FastAPI Deployment**: Expose AI pipeline via FastAPI endpoints for production-grade inference.
- **Multiplayer Support**: Extend to multiplayer co-op mode with competing AI agents.
- **Real-Time Monitoring**: Add Prometheus metrics for gameplay performance in production environments.
**Website**: https://ghostainews.com/
**Discord**: https://discord.gg/BfA23aYz
## Latest Update
**Status Update**: Optimized collision detection for smoother interactions - May 28, 2025 📝
- Optimized collision detection for smoother interactions - September 22, 2025 📝
- Upgraded power-up distribution system - September 20, 2025 📝
- Introduced adaptive weather in game levels - September 19, 2025 📝
- Tweaked jump controls for improved accuracy - September 17, 2025 📝
- Added fresh enemy tactics for extra difficulty - September 15, 2025 📝
- Refined AI pathfinding for seamless gameplay - September 14, 2025 📝
- Added support for multiplayer co-op mode 🍄 - September 12, 2025 📝
- Improved level loading times by 30% - September 10, 2025 📝
- Integrated new collectible items for bonus challenges - September 09, 2025 📝
- Enhanced NPC dialogue with dynamic responses - September 07, 2025 📝
- Optimized collision detection for smoother interactions ⭐ - September 05, 2025 📝
- Upgraded power-up distribution system 🎉 - September 04, 2025 📝
- Introduced adaptive weather in game levels - September 02, 2025 📝
- Tweaked jump controls for improved accuracy - August 31, 2025 📝
- Added fresh enemy tactics for extra difficulty 🏰 - August 30, 2025 📝
- Refined AI pathfinding for seamless gameplay 🪙 - August 28, 2025 📝
- Added support for multiplayer co-op mode - August 26, 2025 📝
- Improved level loading times by 30% - August 25, 2025 📝
- Integrated new collectible items for bonus challenges ✨ - August 23, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🎩 - August 21, 2025 📝
- Optimized collision detection for smoother interactions 🔥 - August 20, 2025 📝
- Upgraded power-up distribution system - August 18, 2025 📝
- Introduced adaptive weather in game levels 🌈 - August 16, 2025 📝
- Tweaked jump controls for improved accuracy - August 15, 2025 📝
- Added fresh enemy tactics for extra difficulty 🔥 - August 14, 2025 📝
- Refined AI pathfinding for seamless gameplay - August 13, 2025 📝
- Added support for multiplayer co-op mode - August 12, 2025 📝
- Improved level loading times by 30% ⚡ - August 11, 2025 📝
- Integrated new collectible items for bonus challenges - August 10, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🍄 - August 09, 2025 📝
- Optimized collision detection for smoother interactions 🎩 - August 08, 2025 📝
- Upgraded power-up distribution system 🪙 - August 07, 2025 📝
- Introduced adaptive weather in game levels - August 06, 2025 📝
- Tweaked jump controls for improved accuracy 🎉 - August 05, 2025 📝
- Added fresh enemy tactics for extra difficulty - August 04, 2025 📝
- Refined AI pathfinding for seamless gameplay - August 03, 2025 📝
- Added support for multiplayer co-op mode 🌈 - August 02, 2025 📝
- Improved level loading times by 30% ⭐ - August 01, 2025 📝
- Integrated new collectible items for bonus challenges 🏰 - July 31, 2025 📝
- Enhanced NPC dialogue with dynamic responses - July 30, 2025 📝
- Optimized collision detection for smoother interactions - July 29, 2025 📝
- Upgraded power-up distribution system - July 28, 2025 📝
- Introduced adaptive weather in game levels ✨ - July 27, 2025 📝
- Tweaked jump controls for improved accuracy ⚡ - July 26, 2025 📝
- Added fresh enemy tactics for extra difficulty 🎉 - July 25, 2025 📝
- Refined AI pathfinding for seamless gameplay - July 24, 2025 📝
- Added support for multiplayer co-op mode - July 23, 2025 📝
- Improved level loading times by 30% - July 22, 2025 📝
- Integrated new collectible items for bonus challenges 🏰 - July 21, 2025 📝
- Enhanced NPC dialogue with dynamic responses - July 20, 2025 📝
- Optimized collision detection for smoother interactions ⭐ - July 19, 2025 📝
- Upgraded power-up distribution system - July 18, 2025 📝
- Introduced adaptive weather in game levels - July 17, 2025 📝
- Tweaked jump controls for improved accuracy 🔥 - July 16, 2025 📝
- Added fresh enemy tactics for extra difficulty 🎩 - July 15, 2025 📝
- Refined AI pathfinding for seamless gameplay 🍄 - July 14, 2025 📝
- Added support for multiplayer co-op mode - July 11, 2025 📝
- Improved level loading times by 30% 🪙 - July 10, 2025 📝
- Integrated new collectible items for bonus challenges - July 09, 2025 📝
- Enhanced NPC dialogue with dynamic responses ✨ - July 08, 2025 📝
- Optimized collision detection for smoother interactions 🌈 - July 07, 2025 📝
- Upgraded power-up distribution system ⭐ - July 06, 2025 📝
- Introduced adaptive weather in game levels - July 05, 2025 📝
- Tweaked jump controls for improved accuracy 🏰 - July 04, 2025 📝
- Added fresh enemy tactics for extra difficulty ✨ - July 03, 2025 📝
- Refined AI pathfinding for seamless gameplay 🪙 - July 02, 2025 📝
- Added support for multiplayer co-op mode 🍄 - July 01, 2025 📝
- Improved level loading times by 30% ⚡ - June 30, 2025 📝
- Integrated new collectible items for bonus challenges 🌈 - June 29, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🎉 - June 28, 2025 📝
- Optimized collision detection for smoother interactions - June 27, 2025 📝
- Upgraded power-up distribution system - June 26, 2025 📝
- Introduced adaptive weather in game levels 🔥 - June 25, 2025 📝
- Tweaked jump controls for improved accuracy 🎩 - June 24, 2025 📝
- Added fresh enemy tactics for extra difficulty - June 23, 2025 📝
- Refined AI pathfinding for seamless gameplay ✨ - June 22, 2025 📝
- Added support for multiplayer co-op mode 🔥 - June 21, 2025 📝
- Improved level loading times by 30% 🎉 - June 20, 2025 📝
- Integrated new collectible items for bonus challenges 🍄 - June 19, 2025 📝
- Enhanced NPC dialogue with dynamic responses - June 18, 2025 📝
- Optimized collision detection for smoother interactions ⭐ - June 17, 2025 📝
- Upgraded power-up distribution system - June 16, 2025 📝
- Introduced adaptive weather in game levels - June 15, 2025 📝
- Tweaked jump controls for improved accuracy 🪙 - June 14, 2025 📝
- Added fresh enemy tactics for extra difficulty - June 13, 2025 📝
- Refined AI pathfinding for seamless gameplay - June 12, 2025 📝
- Added support for multiplayer co-op mode 🌈 - June 11, 2025 📝
- Improved level loading times by 30% ⚡ - June 10, 2025 📝
- Integrated new collectible items for bonus challenges - June 09, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🎩 - June 08, 2025 📝
- Optimized collision detection for smoother interactions - June 07, 2025 📝
- Upgraded power-up distribution system 🏰 - June 06, 2025 📝
- Introduced adaptive weather in game levels 🏰 - June 05, 2025 📝
- Tweaked jump controls for improved accuracy ⭐ - June 04, 2025 📝
- Added fresh enemy tactics for extra difficulty 🎉 - June 03, 2025 📝
- Refined AI pathfinding for seamless gameplay - June 02, 2025 📝
- Added support for multiplayer co-op mode ✨ - June 01, 2025 📝
- Improved level loading times by 30% - May 31, 2025 📝
- Integrated new collectible items for bonus challenges ⚡ - May 30, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🔥 - May 29, 2025 📝
|
somu9/tts-mms-kfy
|
somu9
| 2025-09-22T13:08:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T13:06:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rinaetnoreas/Qwen3-0.6B-Gensyn-Swarm-striped_untamed_chimpanzee
|
Rinaetnoreas
| 2025-09-22T13:07:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am striped_untamed_chimpanzee",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T13:07:07Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am striped_untamed_chimpanzee
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
veeravel/bart_large
|
veeravel
| 2025-09-22T12:32:07Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T12:32:06Z |
---
license: apache-2.0
---
|
mradermacher/Advanced_Risk_Reward_Tampering_llama-GGUF
|
mradermacher
| 2025-09-22T12:29:04Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T11:49:31Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/yujunzhou/Advanced_Risk_Reward_Tampering_llama
|
LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF
|
LeroyDyer
| 2025-09-22T12:28:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM",
"base_model:quantized:LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T12:27:58Z |
---
base_model: LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF
This model was converted to GGUF format from [`LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM`](https://huggingface.co/LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q4_k_s.gguf -c 2048
```
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2-checkpoint-epoch-100
|
MattBou00
| 2025-09-22T12:27:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T12:26:54Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2-checkpoint-epoch-100")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2-checkpoint-epoch-100")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2-checkpoint-epoch-100")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF
|
mradermacher
| 2025-09-22T12:26:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"QiMing",
"vllm",
"sales",
"b2b",
"saas",
"fine-tuned",
"instruction-following",
"role-playing",
"cognitive-simulator",
"en",
"zh",
"base_model:aifeifei798/QiMing-Sales-20B-MXFP4",
"base_model:quantized:aifeifei798/QiMing-Sales-20B-MXFP4",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-22T11:04:44Z |
---
base_model: aifeifei798/QiMing-Sales-20B-MXFP4
language:
- en
- zh
library_name: transformers
license: apache-2.0
model_name: QiMing-Sales-20B
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- unsloth
- QiMing
- vllm
- sales
- b2b
- saas
- fine-tuned
- instruction-following
- role-playing
- cognitive-simulator
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: MXFP4_MOE Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/aifeifei798/QiMing-Sales-20B-MXFP4
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#QiMing-Sales-20B-MXFP4-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ1_M.gguf) | i1-IQ1_M | 12.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ1_S.gguf) | i1-IQ1_S | 12.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ2_M.gguf) | i1-IQ2_M | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ2_S.gguf) | i1-IQ2_S | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ3_S.gguf) | i1-IQ3_S | 12.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q2_K.gguf) | i1-Q2_K | 12.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q2_K_S.gguf) | i1-Q2_K_S | 12.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q4_0.gguf) | i1-Q4_0 | 12.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-IQ3_M.gguf) | i1-IQ3_M | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q4_1.gguf) | i1-Q4_1 | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.0 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Sales-20B-MXFP4-i1-GGUF/resolve/main/QiMing-Sales-20B-MXFP4.i1-Q6_K.gguf) | i1-Q6_K | 22.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Verdict-8x7B-i1-GGUF
|
mradermacher
| 2025-09-22T12:20:57Z | 0 | 0 | null |
[
"gguf",
"region:us"
] | null | 2025-09-22T12:20:32Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Envoid/Verdict-8x7B
|
apsora/finetuning_text_model
|
apsora
| 2025-09-22T12:19:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T11:18:05Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuning_text_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning_text_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0422
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
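For readers who want to reproduce this setup, the list above corresponds roughly to the following `TrainingArguments` sketch; the argument names are standard transformers options, and `output_dir` is an assumption:
```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
args = TrainingArguments(
    output_dir="finetuning_text_model",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```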
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.2278 | 1.0 | 84 | 1.0599 | 0.9048 | 0.9030 | 0.9148 | 0.9048 |
| 0.509 | 2.0 | 168 | 0.3537 | 0.9821 | 0.9820 | 0.9829 | 0.9821 |
| 0.1262 | 3.0 | 252 | 0.1090 | 0.9881 | 0.9881 | 0.9883 | 0.9881 |
| 0.0686 | 4.0 | 336 | 0.0548 | 0.9940 | 0.9940 | 0.9943 | 0.9940 |
| 0.0469 | 5.0 | 420 | 0.0482 | 0.9940 | 0.9940 | 0.9943 | 0.9940 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
sagea-ai/sage-reasoning-8b
|
sagea-ai
| 2025-09-22T12:16:43Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ko",
"fr",
"zh",
"es",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:54:57Z |
---
license: llama3.2
library_name: transformers
pipeline_tag: text-generation
language:
- en
- ko
- fr
- zh
- es
---
<div align="center">






<img src="images/sagea-logo.png" alt="SAGE Logo" width="75%">
# SAGE Reasoning 8B
*Advanced Hybrid Reasoning Model with Tool-Calling Capabilities*
[](https://huggingface.co/)
[](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE)
</div>
---
## Table of Contents
- [Overview](#overview)
- [Key Features](#key-features)
- [Evaluations](#evaluations)
- [License](#license)
- [Contact](#contact)
---
## Overview
SAGE Reasoning Family Models are instruction-tuned, text-in/text-out generative systems released under a permissive open license for commercial use.
## Key Features
### **Hybrid Reasoning Architecture**
- **Dual Mode Operation**: Capable of producing fast direct responses in standard LLM mode, or applying self-reflection before answering in reasoning mode
- **Advanced Training**: Uses **Iterated Distillation and Amplification (IDA)** - a scalable alignment method based on iterative self-improvement
### **Specialized Capabilities**
- **Code Generation**: Optimized for programming tasks with strong coding abilities
- **STEM Excellence**: Enhanced performance on science, technology, engineering, and mathematics problems
- **Instruction Following**: Superior adherence to complex instructions and prompts
- **Tool Calling**: Notable strength in tool-calling ability compared to similar-sized models
### **Global Reach**
- **Multilingual Support**: Over 30 languages supported
- **Extended Context**: 128k context window for handling large documents and conversations
- **Consistent Performance**: Both standard and reasoning variants consistently outperform other models in the same parameter class on public benchmarks
## Evaluations
We compare our models against state-of-the-art size-equivalent models in both direct mode and reasoning mode. For direct mode, we compare against Llama/Qwen instruct counterparts. For reasoning, we use Deepseek's R1 distilled counterparts and Qwen's QwQ model.
### Overall Performance Benchmarks
<div align="center">
<img src="images/8b_benchmarks.png" alt="Overall Performance Benchmarks" width="85%">
<p><em>Comprehensive benchmark results showing SAGE Reasoning 8B performance across multiple evaluation metrics</em></p>
</div>
### Livebench Global Average
<div align="center">
<img src="images/3b_8b_tools.png" alt="Livebench Global Average Performance" width="75%">
<p><em>Livebench global performance comparison demonstrating consistent superiority</em></p>
</div>
### Tool Calling Performance
<div align="center">
<img src="images/3b_8b_tool_calling_benchmarks (1).png" alt="Tool Calling Benchmarks" width="85%">
<p><em>Tool calling capabilities comparison showing enhanced performance in function calling and tool utilization</em></p>
</div>
---
# Usage
Here is a usage snippet with Transformers:
```python
import transformers
import torch
model_id = "sagea-ai/sage-reasoning-8b"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Give me a short introduction to LLMs."},
]
outputs = pipeline(
messages,
max_new_tokens=512,
)
print(outputs[0]["generated_text"][-1])
```
## Implementing extended thinking
- By default, the model will answer in the standard mode.
- To enable thinking, you can do any one of the two methods:
- Add a specific system prompt, or
- Set `enable_thinking=True` while applying the chat template.
> **_NOTE:_** For the SAGE reasoning 3b model, we suggest using `repetition_penalty=1.1` while implementing extended thinking.
### Method 1 - Add a specific system prompt.
To enable thinking, simply use this in the system prompt `system_instruction = 'Enable deep thinking subroutine.'`
If you already have a system_instruction, then use `system_instruction = 'Enable deep thinking subroutine.' + '\n\n' + system_instruction`.
Here is an example -
```python
import transformers
import torch
model_id = "sagea-ai/sage-reasoning-8b"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
DEEP_THINKING_INSTRUCTION = "Enable deep thinking subroutine."
messages = [
{"role": "system", "content": DEEP_THINKING_INSTRUCTION},
{"role": "user", "content": "Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."},
]
outputs = pipeline(
messages,
max_new_tokens=512,
)
print(outputs[0]["generated_text"][-1])
```
Similarly, if you have a system prompt, you can append the `DEEP_THINKING_INSTRUCTION` to the beginning in this way -
```python
DEEP_THINKING_INSTRUCTION = "Enable deep thinking subroutine."
system_prompt = "Reply to each prompt with only the actual code - no explanations."
prompt = "Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."
messages = [
{"role": "system", "content": DEEP_THINKING_INSTRUCTION + '\n\n' + system_prompt},
{"role": "user", "content": prompt}
]
```
### Method 2 - Set enable_thinking=True in the tokenizer
If you are using Huggingface tokenizers, then you can simply add the argument `enable_thinking=True` to the tokenization (this option is added to the chat template).
Here is an example -
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "sagea-ai/sage-reasoning-8b"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to LLMs."
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
# Tool Calling
SAGE reasoning models support tool calling (single, parallel, multiple and parallel_multiple) both in standard and extended thinking mode.
Here is a snippet -
```python
# First, define a tool
def get_current_temperature(location: str) -> float:
"""
Get the current temperature at a location.
Args:
location: The location to get the temperature for, in the format "City, Country"
Returns:
The current temperature at the specified location in the specified units, as a float.
"""
return 22. # A real function should probably actually get the temperature!
# Next, create a chat and apply the chat template
messages = [
{"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]
text = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
output_text = tokenizer.batch_decode(outputs)[0][len(text):]
print(output_text)
```
This will result in the output -
```
<tool_call>
{"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
</tool_call><|eot_id|>
```
You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:
```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```
and then call the tool and append the result, with the `tool` role, like so:
```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```
After that, you can `generate()` again to let the model use the tool result in the chat:
```python
text = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
output_text = tokenizer.batch_decode(outputs)[0][len(text):]
```
This should result in the string -
'The current temperature in Paris is 22.0 degrees.<|eot_id|>'
## License
This repository and the model weights are licensed under the [**Llama 3.2 Community License Agreement**](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (Llama models' default license agreement).
<div align="center">
[](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE)
</div>
## Contact
<div align="center">
**Get in Touch with Our Team**
For inquiries, collaborations, or support, please reach out to us:
**Email**: [[email protected]](mailto:[email protected])
---
<p>
<strong>SAGE Reasoning 8B</strong><br>
<em>Advancing the frontier of hybrid reasoning models</em>
</p>

</div>
|
veeravel/paraphraser
|
veeravel
| 2025-09-22T12:11:15Z | 0 | 0 | null |
[
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T11:56:46Z |
---
license: apache-2.0
---
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758542967
|
poolkiltzn
| 2025-09-22T12:10:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T12:10:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
K2Tilly/llama-finetune-qwen3-4b-MAP_math-02
|
K2Tilly
| 2025-09-22T12:07:42Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"license:other",
"region:us"
] |
text-generation
| 2025-09-22T12:07:05Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
- base_model:adapter:Qwen/Qwen3-4B-Instruct-2507
- llama-factory
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: train_run_03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_run_03
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the qwen3_math_misconception_sharegpt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1010
- Num Input Tokens Seen: 60694016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q5_K_S-GGUF
|
LeroyDyer
| 2025-09-22T12:06:31Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM",
"base_model:quantized:LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T12:06:10Z |
---
base_model: LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q5_K_S-GGUF
This model was converted to GGUF format from [`LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM`](https://huggingface.co/LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q5_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q5_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q5_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q5_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q5_k_s.gguf -c 2048
```
|
abandonedmonk/TinyLlama-1.1B-NL2SH-Alpaca-v1
|
abandonedmonk
| 2025-09-22T12:04:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"text-generation",
"conversational",
"en",
"dataset:abandonedmonk/NL2SH-ALPACA",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"license:mit",
"region:us"
] |
text-generation
| 2025-09-22T09:50:07Z |
---
base_model: unsloth/tinyllama-chat
library_name: peft
model_name: outputs
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
license: mit
datasets:
- abandonedmonk/NL2SH-ALPACA
language:
- en
new_version: TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text-generation
---
# Model Card for TinyLlama-1.1B-NL2SH-Alpaca
This model is a **fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat)**.
It has been fine-tuned on the **NL2SH-Alpaca dataset** for converting **natural language instructions into bash commands**.
The model outputs **one bash command per instruction**, even if multiple alternatives exist in the training dataset.
---
## Quick start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the model and tokenizer
model_name = "abandonedmonk/TinyLlama-1.1B-NL2SH-Alpaca"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")
# Inference helper
def generate_command(model, tokenizer, instruction, inp=""):
# build prompt in Alpaca-style
prompt = f"""Instruction: {instruction}
Input: {inp}
Response:
"""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=100,
do_sample=False, # greedy decoding
temperature=0.0,
num_return_sequences=1
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Extract the first line (command) after "Response:"
# If you want to keep all the commands, just simply return 'generated_text' instead of 'response'
response = generated_text.strip().split("Response:")[-1].strip().split('\n')[0]
return response
# Example usage
instruction = "Rename all files with .andnav extension to .tile"
bash_cmd = generate_command(model, tokenizer, instruction)
print("Generated bash command:", bash_cmd)
```
---
## Training procedure
This model was fine-tuned using **Supervised Fine-Tuning (SFT)** on the NL2SH-Alpaca dataset, which contains natural language instructions paired with shell commands.
* **Base model:** `unsloth/tinyllama-chat`
* **Dataset:** `abandonedmonk/NL2SH-ALPACA`
* **Frameworks:** PEFT, Transformers, Unsloth
* **Number of epochs:** 3
* **Batch size / seq length:** 4
---
## Citations
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
---
## License
This model is released under the **MIT License**
---
## Contributors / Maintainers
- **Anshuman Jena** – fine-tuner, and maintainer of this model 🐸
## Notes
* This model is designed for **English instructions** only.
* Outputs **one command per instruction**; alternative commands can be manually handled if desired.
* For reproducibility, set the same `seed` (3407) during fine-tuning.
|
TAUR-dev/M-BASELINE_gtp4o_BOLT-sft
|
TAUR-dev
| 2025-09-22T12:04:46Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-22T12:04:16Z |
# M-BASELINE_gtp4o_BOLT-sft
This model was created as part of the **BASELINE_gtp4o_BOLT** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: BASELINE_gtp4o_BOLT
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_BASELINE_gtp4o_BOLT_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/BASELINE_gpt4o_BOLT/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__BASELINE_gtp4o_BOLT__v1", "sf_eval_before_training": false, "sf_wandb_project": "BASELINE_gtp4o_BOLT_sft", "sf_eval_steps": null, "run_name": "BASELINE_gtp4o_BOLT_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__BASELINE_gtp4o_BOLT__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-BASELINE_gtp4o_BOLT-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-BASELINE_gtp4o_BOLT-sft")
```
|
lihuamao111/Qwen3-0.6B-Gensyn-Swarm-flexible_powerful_grasshopper
|
lihuamao111
| 2025-09-22T12:02:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am flexible_powerful_grasshopper",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T12:02:06Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am flexible_powerful_grasshopper
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758542346
|
poolkiltzn
| 2025-09-22T12:00:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T12:00:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yhytoto12/BeDLM-1B
|
yhytoto12
| 2025-09-22T12:00:25Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"LLM",
"Spoken Dialogue Generation",
"Conversational Behavior",
"en",
"dataset:yhytoto12/behavior-sd",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T20:00:07Z |
---
library_name: transformers
tags:
- LLM
- Spoken Dialogue Generation
- Conversational Behavior
license: llama3.2
datasets:
- yhytoto12/behavior-sd
language:
- en
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: text-generation
---
# 🎙️ Behavior-SD
Official repository for our **NAACL 2025** paper:
<a href="https://aclanthology.org/2025.naacl-long.484/"><b>Behavior-SD: Behaviorally Aware Spoken Dialogue Generation with Large Language Models</b></a>
[Sehun Lee*](https://yhytoto12.github.io/), [Kang-wook Kim*](https://kwkim.me/), [Gunhee Kim](https://vision.snu.ac.kr/gunhee/) (* Equal contribution)
> 🏆 **SAC Award Winner** in Speech Processing and Spoken Language Understanding
## 🔗 Links
- 🌐 [**Project Page**](https://yhytoto12.github.io/Behavior-SD)
- 🤗 [**Dataset**](https://huggingface.co/datasets/yhytoto12/behavior-sd)
- 🤖 [**Model**](https://huggingface.co/yhytoto12/BeDLM-1B)
- 📄 [**Paper**](https://aclanthology.org/2025.naacl-long.484/)
## 💥 Updates
- `2025-09-22`: Released the 🤗 [BeDLM](https://huggingface.co/yhytoto12/BeDLM-1B) and its Streamlit demo.
- `2025-04-27`: Released the 🤗 [Behavior-SD](https://huggingface.co/datasets/yhytoto12/behavior-sd) dataset.
## 📖 Overview
We explore how to generate natural, behaviorally rich full-duplex spoken dialogues using large language models (LLMs).
We introduce:
- **Behavior-SD** Dataset: 108K full-duplex dialogues (2,164 hours) with rich speaker-wise behavioral annotations.
- **BeDLM**: A novel end-to-end LLM-based spoken dialogue generator conditioned on narrative and behavioral traits.
<p align="center">
<img src="assets/Behavior-SD.png" width="90%">
</p>
Unlike existing spoken dialogue datasets that neglect full-duplex dynamics (e.g., interruptions, backchannels), Behavior-SD captures and models realistic conversational behaviors, enabling more natural and human-like spoken dialogues.
## 📂 Dataset
Behavior-SD provides large-scale, behavior-annotated spoken dialogues.
- Download from the Hugging Face Hub
```python
from datasets import load_dataset
# Load the Behavior-SD dataset using streaming mode (recommended for large datasets)
dataset = load_dataset(
"yhytoto12/behavior-sd",
split="train", # "validation" or "test"
streaming=True
)
# Example: Iterate over the dataset
for i, example in enumerate(dataset):
print(example)
break
```
- Data Structure
```JSON
{
"soda_split": "train",
"soda_index": 4,
"narrative": "Cornell knows what Dontrell is thinking...",
"speakers": ["Cornell", "Dontrell"],
"behaviors": [
{"utterance_length": 0, "filler_words": 0, "backchannels": 0, "interruptions": 2},
{"utterance_length": 0, "filler_words": 2, "backchannels": 0, "interruptions": 0}
],
"num_turns": 10,
"utterances": [
{
"uttr_idx": 0,
"uttr_type": null,
"speaker_idx": 1,
"speaker": "Dontrell",
"tts_text": "So, I was thinking... um... we should probably plan...",
"dur_samples": 60672,
"start_time": 0.0,
"end_time": 2.75156462585034
},
...
],
"tts_speaker_ids": ["0001024622_0", "0000805189_1"],
"tts_genders": ["female", "male"],
"statistics": {
"num_utterances": [5, 5],
"num_turntaking": [5, 4],
"durations": [5.53, 25.35],
"num_interruptions": [2, 0],
"num_backchannels": [0, 0],
"num_filler_words": [0, 8]
}
}
```
Behavior annotations are provided at utterance and speaker levels, enabling fine-grained control and analysis.
## 🤖 BeDLM
We introduce BeDLM, a novel LLM-based spoken dialogue generator that produces behaviorally rich dialogues conditioned on narrative and speaker behaviors.
<p align="center">
<img src="assets/BeDLM.png" width="90%">
</p>
- Our pretrained BeDLM can be found at 🤗 [Hugging Face](https://huggingface.co/yhytoto12/BeDLM-1B).
- The model is Llama3.2-1B fine-tuned on Behavior-SD.
- The vocoders can be found at [Google Drive](https://drive.google.com/drive/folders/1jtEFBbte3W1JMLL-_22nRhdueG5gIL-k?usp=sharing).
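A minimal loading sketch (assuming BeDLM loads as a standard causal LM via 🤗 Transformers; the actual narrative/behavior conditioning format is defined by the repository's templates, so the prompt below is only a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yhytoto12/BeDLM-1B")
model = AutoModelForCausalLM.from_pretrained("yhytoto12/BeDLM-1B")

# Placeholder prompt -- see the repository for the real conditioning format
# (narrative plus per-speaker behavior traits).
prompt = "Narrative: Two friends plan a weekend trip.\nBehaviors: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```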
### 🚀 Streamlit Demo
```bash
conda create -n BeDLM python=3.10
conda activate BeDLM
pip install -r requirements.txt
# Download the pretrained vocoder model and place it in `ckpts/vocoders`
mkdir -p ckpts/vocoders
# Run the demo
streamlit run demo.py
```
## 📌 Citation
If you find our work useful, please consider citing us:
```bib
@inproceedings{lee-and-kim-behaviorsd,
    title     = {Behavior-SD: Behaviorally Aware Spoken Dialogue Generation with Large Language Models},
    author    = {Sehun Lee and Kang-wook Kim and Gunhee Kim},
    booktitle = {Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics},
    year      = {2025},
    url       = {https://aclanthology.org/2025.naacl-long.484/}
}
```
|
piki-eth/Smoothie-Qwen3-1.7B-Gensyn-Swarm-enormous_peaceful_ibis
|
piki-eth
| 2025-09-22T11:52:42Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am enormous_peaceful_ibis",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T09:37:23Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am enormous_peaceful_ibis
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xyy121214/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stalking_tall_hornet
|
xyy121214
| 2025-09-22T11:51:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am stalking_tall_hornet",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T06:46:38Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am stalking_tall_hornet
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
afiyarah/embedding-ins-make
|
afiyarah
| 2025-09-22T11:49:40Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:9431",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:google/embeddinggemma-300m",
"base_model:finetune:google/embeddinggemma-300m",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-22T11:49:20Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:9431
- loss:CosineSimilarityLoss
base_model: google/embeddinggemma-300m
widget:
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: دي سوتو'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: هايتسو'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: ربيلكااوبرا'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: ديهاتسو'
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: سي آر إس'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: داسيا'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: كاوساكي'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: كيوتي'
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: آمي'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: كراز'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: سي ام سي دي'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: شميت'
- source_sentence: 'In the car insurance domain, represent this car make entity in
english for entity similarity matching: checker'
sentences:
- 'In the car insurance domain, represent this car make entity in english for entity
similarity matching: tiger'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: جاك'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: فوسو'
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: جي إي سي'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: ايدزل'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: واكر'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: سالك'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on google/embeddinggemma-300m
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: insurance val
type: insurance-val
metrics:
- type: pearson_cosine
value: 0.8319304484319612
name: Pearson Cosine
- type: spearman_cosine
value: 0.6431780348935766
name: Spearman Cosine
---
# SentenceTransformer based on google/embeddinggemma-300m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision c5cfa06e5e282a820e85d57f7fb053207494f41d -->
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(4): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("afiyarah/embedding-ins-make")
# Run inference
queries = [
"In the car insurance domain, represent this car make entity in arabic for entity similarity matching: \u062c\u064a \u0625\u064a \u0633\u064a",
]
documents = [
'In the car insurance domain, represent this car make entity in arabic for entity similarity matching: سالك',
'In the car insurance domain, represent this car make entity in arabic for entity similarity matching: ايدزل',
'In the car insurance domain, represent this car make entity in arabic for entity similarity matching: واكر',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.5667, 0.5606, 0.5776]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `insurance-val`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8319 |
| **spearman_cosine** | **0.6432** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 9,431 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 21 tokens</li><li>mean: 23.43 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 22.97 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 0.1</li><li>mean: 0.28</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: إل تي إم جي</code> | <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: بوماج</code> | <code>0.19999999999999998</code> |
| <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: يو دي</code> | <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: لادا</code> | <code>0.19999999999999998</code> |
| <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: إنساين</code> | <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: شانسي</code> | <code>0.4</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
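For reference, a minimal sketch of feeding such labeled pairs to `CosineSimilarityLoss` (illustrative strings and sizes, not the actual training script):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("google/embeddinggemma-300m")

# Each pair carries a similarity label in [0, 1]; the loss applies MSE between
# the embeddings' cosine similarity and this label.
train_examples = [
    InputExample(texts=["... entity: يو دي", "... entity: لادا"], label=0.2),
    InputExample(texts=["... entity: إنساين", "... entity: شانسي"], label=0.4),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=3)
```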
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | insurance-val_spearman_cosine |
|:------:|:----:|:-------------:|:-----------------------------:|
| 0.0983 | 58 | - | 0.4972 |
| 0.1966 | 116 | - | 0.5621 |
| 0.2949 | 174 | - | 0.5636 |
| 0.3932 | 232 | - | 0.5194 |
| 0.4915 | 290 | - | 0.6253 |
| 0.5898 | 348 | - | 0.6236 |
| 0.6881 | 406 | - | 0.5702 |
| 0.7864 | 464 | - | 0.6208 |
| 0.8475 | 500 | 0.0209 | - |
| 0.8847 | 522 | - | 0.6018 |
| 0.9831 | 580 | - | 0.5994 |
| 1.0 | 590 | - | 0.6048 |
| 1.0814 | 638 | - | 0.6002 |
| 1.1797 | 696 | - | 0.6083 |
| 1.2780 | 754 | - | 0.5940 |
| 1.3763 | 812 | - | 0.6044 |
| 1.4746 | 870 | - | 0.6248 |
| 1.5729 | 928 | - | 0.6432 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
its-zion-18/music-text-distilbert-predictor
|
its-zion-18
| 2025-09-22T11:42:19Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:samder03/2025-24679-text-dataset",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-20T19:04:44Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: music-text-distilbert-predictor
results: []
datasets:
- samder03/2025-24679-text-dataset
---
# DistilBERT-based Music Era Classifier
This repository contains a fine-tuned text classification model based on distilbert-base-uncased. The model classifies short text descriptions of classical music into one of four historical musical eras, encoded as the numerical labels 0, 1, 2, and 3.
# Model Architecture & Training
The model was trained using the Hugging Face Trainer API. It utilizes a distilbert-base-uncased pre-trained model with a classification head on top.
- Tokenizer: AutoTokenizer.from_pretrained("distilbert-base-uncased")
- Model: AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
- Training Arguments: Learning Rate: 2e-5
- Epochs: 5
- Batch Size: 8
- Evaluation Strategy: Per epoch
- Metric: accuracy
- Optimizer: AdamW
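A minimal sketch of this setup (dataset wiring omitted; the `tokenized_train`/`tokenized_eval` names are placeholders):
```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=4)

args = TrainingArguments(
    output_dir="music-text-distilbert-predictor",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    eval_strategy="epoch",
)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=tokenized_train, eval_dataset=tokenized_eval)
# trainer.train()
```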
# music-text-distilbert-predictor
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the samder03/2025-24679-text-dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0495
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
## Limitations
This model's primary limitations are listed below; a label-mapping sketch follows the list.
- **Numerical labels:** the model outputs a numerical label (0, 1, 2, or 3). An external lookup table is required to map these numbers to their corresponding musical era names.
- **Language & casing:** because the model is based on distilbert-base-uncased, it is designed for English-language text and does not differentiate between uppercase and lowercase letters. It will not work for other languages.
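A minimal inference sketch (the era names in `id2era` are hypothetical placeholders; consult the dataset card for the real label-to-era mapping):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="its-zion-18/music-text-distilbert-predictor")

# Hypothetical mapping -- the true assignment of 0..3 to era names must come
# from the dataset card.
id2era = {0: "Baroque", 1: "Classical", 2: "Romantic", 3: "Modern"}

pred = clf("Ornate counterpoint and harpsichord continuo dominate the texture.")[0]
label_id = int(pred["label"].split("_")[-1])  # labels typically look like "LABEL_0"
print(id2era[label_id], pred["score"])
```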
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6387 | 1.0 | 80 | 0.5111 | 0.9563 | 0.9562 | 0.9574 | 0.9563 |
| 0.0833 | 2.0 | 160 | 0.1052 | 0.9812 | 0.9812 | 0.9814 | 0.9812 |
| 0.0221 | 3.0 | 240 | 0.0585 | 0.9812 | 0.9812 | 0.9814 | 0.9812 |
| 0.0122 | 4.0 | 320 | 0.0629 | 0.9812 | 0.9812 | 0.9814 | 0.9812 |
| 0.011 | 5.0 | 400 | 0.0614 | 0.9812 | 0.9812 | 0.9814 | 0.9812 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
# Potential Errors
There may be a data-leakage problem, since the accuracy is at 100%. Because the model has already been trained on the augmented data, which is just a derivative of the original data, the original dataset isn't a true holdout set: the model is essentially being tested on data that it has already seen and, in some cases, memorized.
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758541113
|
poolkiltzn
| 2025-09-22T11:40:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T11:39:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
davidoneil/decisoes-processos-tce-ft-gemma
|
davidoneil
| 2025-09-22T11:39:16Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:49816",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:google/embeddinggemma-300m",
"base_model:finetune:google/embeddinggemma-300m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-22T11:36:17Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:49816
- loss:MultipleNegativesRankingLoss
base_model: google/embeddinggemma-300m
widget:
- source_sentence: transferência reserva servidor militar fundamento legal constitucional
procedimento revisão proventos
sentences:
- 'Este trecho apresenta a conclusão do Ministério Público de Contas sobre o processo,
avaliando se a análise se configura como monitoramento de decisão anterior e comentando
a suficiência das informações apresentadas pela Secretaria da Saúde para comprovar
o atendimento às determinações.. 3. O Ministério Público de Contas, no Parecer
Ministerial no 780/2020 GPMC (ev . 22), apresentou a seguinte conclusão: III CONCLUSÃO
Ante o exposto, tendo em vista a ausência de pressupostos mínimos no bojo do Acórdão
1143/2018 para que este seja objeto de monitoramento, concluiu-se que a presente
análise não se trata de um monitoramento propriamente dito, mas de uma verificação
quanto aos resultados de uma decisão expedida por essa Corte de Contas . A Unidade
Técnica demonstrou o impacto positivo com a implementação das recomendações exaradas,
especialmente no que tange à melhoria da estrutura física da Central de Medicamentos
de Alto Custo Juarez Barbosa, mas deixou de analisar o cumprimento das determinações
proferidas na decisão da TCE-GO . Embora a Secretaria da Saúde tenha apresentado
informações quanto ao cumprimento dos comandos exarados, não foram juntados aos
autos os documentos que comprovam as informações prestadas pela Secretaria da
Saúde. Assim, caso estivéssemos diante um monitoramento propriamente dito, o correto
Documento assinado eletrônicamente com fundamento da Resolução Normativa 12/2017
do TCE-GO, Art. 6o . Número do Processo: 201200047003401 . 2 3 seria a intimação
do Secretário da Saúde para que encaminhasse os documentos comprobatórios das
situações fáticas narradas'
- 'Este trecho é o início do documento e apresenta a identificação do processo de
transferência para reserva, os dados principais do servidor Wider Lonso Alves
da Silva, incluindo histórico de cargos e proventos, e a decisão final do Tribunal
de Contas que considerou os atos legais e determinou seu registro. Contém também
o início da seção do RELATÓRIO.Pág. 1 1 ÓRGÃO: Polícia Militar INTERESSADO: Wider
Lonso Alves da Silva ASSUNTO: 207-01-TRANSFERÊNCIA PARA RESERVA-CONCESSÃO RELATOR:
SAULO MARQUES MESQUITA AUDITOR: HELOISA HELENA ANTONACIO MONTEIRO GODINHO PROCURADOR:
MAÍSA DE CASTRO SOUSA Vistos, oralmente expostos e discutidos os presentes Autos
201700002001949/207-01, referentes aos seguintes atos: Servidor(a Wider Lonso
Alves da Silva . Admissão: Soldado. Data: 1o de novembro de 1989. Transferência
para a reserva: Subtenente. Data: 07 de dezembro de 2017 Revisão: 2o Tenente PM.
Data: 11 de junho de 2019. Órgão: Polícia Militar do Estado de Goiás. Fundamento
legal: Art. 42, 1o da Constituição Federal e art. 100, 12, I e II, e 13 da Constituição
Estadual. Proventos: calculados em 21 de março de 2020, no valor anual de 164.052,98
. Tendo o relatório e o voto como partes integrantes deste, ACORDA o TRIBUNAL
DE CONTAS DO ESTADO DE GOIÁS, pelos votos dos integrantes de sua Primeira Câmara,
ante as razões expostas pelo Relator, em considerar legais os referidos atos,
determinando seu registro, nos termos da Lei Orgânica e Regimento Interno deste
Tribunal, para todos os fins legais . À Secretaria Geral, para as providências
a seu cargo. TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS, em Goiânia aos Acórdão No:
198/2022Acórdão No: 198/2022 TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS Processo no
201700002001949- Pág. 1 1 RELATÓRIO No 2613/2021 GCSM'
- 'Este trecho conclui a seção de relatório do processo administrativo no Tribunal
de Contas de Goiás, detalhando as manifestações dos órgãos técnicos sobre a legalidade
dos atos de pessoal e a avocação do processo pelo Relator. Na sequência, inicia
a seção de voto, onde o Relator expõe a competência da Corte para apreciar e registrar
atos de admissão e aposentadoria e inicia a análise da legalidade desses atos
no caso específico..320,00 (um mil, trezentos e vinte reais) (Evento 37). 4 .
No âmbito desta Corte de Contas, após não ter sido identificado qualquer registro
prévio em nome da servidora (Evento 43), o Serviço de Fiscalização de Atos de
Pessoal I e o Ministério Público de Contas se manifestaram pela legalidade dos
atos de admissão e aposentadoria (Eventos 46 e 47) . Quanto à participação do
Conselheiro Substituto, conquanto tenha sido oportunizada sua manifestação, o
processo foi avocado por esta Relatoria por descumprimento do prazo regimental
(artigo 171 do RITCE-GO). 5. É o Relatório. Passo ao VOTO. Documento assinado
eletrônicamente com fundamento da Resolução Normativa 12/2017 do TCE-GO, Art.
6o. Número do Processo: 202200006027618 . 2 4 6 . Compete ao Controle Externo,
dentre outras atribuições ao seu cargo, a apreciação, para fins de registro, da
legalidade dos atos de admissão de pessoal, bem como das concessões de aposentadorias,
reformas e pensões, ressalvadas as melhorias posteriores que não alterem o fundamento
legal do ato concessório, consoante mandamento constitucional insculpido no artigo
71, inciso III , da Constituição Federal de 1988, bem como artigo 1o, incisos
III e IV, da Lei Orgânica deste Tribunal de Contas . 7. Em relação à admissão,
considerando que a aposentadoria pressupõe o registro prévio do ingresso, a Resolução
Normativa 003/2005 desta Corte sugere que, identificados a necessidade e os elementos
suficientes, seja promovido o registro concomitante dos atos. 8'
- source_sentence: dispensa licitação emergencial fundamento legal voto relator decisão
tribunal
sentences:
- 'Este trecho é a parte inicial do Acórdão (decisão formal) do Tribunal de Contas
do Estado de Goiás, registrando a análise de legalidade e a determinação de registro
do ato de aposentadoria (e admissão) de uma Docente da Universidade Estadual de
Goiás (UEG), mencionando a base legal e detalhando o cálculo dos proventos iniciais.Pág.
1 ACÓRDÃO Aposentadoria da Sra. Maria de Fátima Oliveira. Art. 4o, incisos I a
V, 1o, 2o e 6o, inciso I, EC 103/2019, o art. da Constituição Estadual e o art.
71 da Lei Complementar Estadual no 161/2020. Análise conjunta: admissão submissão
ao concurso público. Legalidade. Registro dos atos . VISTOS, oralmente expostos
e discutidos os presentes autos, de no 202200020022966/204-01, que tratam da análise
da legalidade, para fins de registro, do ato concessivo de aposentadoria à Sra
. Maria de Fátima Oliveira, no cargo de Docente de Ensino Superior Pós-Doutor,
DES V, Nível 3, do Quadro da Carreira dos Docentes de Ensino Superior da Universidade
Estadual de Goiás UEG, perfazendo os proventos a quantia anual e integral de 362.936,88
(trezentos e sessenta e dois mil novecentos e trinta e seis reais e oitenta e
oito centavos), compostos de: Vencimento 302 .447,40 (trezentos e dois mil quatrocentos
e quarenta e sete reais e quarenta centavos) e Gratificação Adicional referente
a 4 (quatro) quinquênios (20%) 60'
- 'Este trecho é o "RELATÓRIO" do processo, apresentando o histórico do caso de
transferência para a reserva de Luís Carlos Gomes. Ele detalha os atos administrativos
realizados pela Polícia Militar e GOIASPREV, incluindo a promoção e a concessão
da reserva remunerada com os respectivos proventos, e resume as manifestações
iniciais dos setores técnicos e do Ministério Público de Contas sobre a legalidade
do ato.. TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS, Goiânia, Acórdão No: 5214/2021Acórdão
No: 5214/2021 TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS Processo no 202000002051346-
Pág. 1 3 RELATÓRIO No 1058/2021 GCCR. 1. Tratam os autos de registro de Transferência
para a Reserva Remunerada em favor de Luis Carlos Gomes, RG no 21.264 PMGO, no
posto de Subtenente, dos Quadros da Polícia Militar do Estado de Goiás. 2 . A
Polícia Militar, por meio da Portaria no 13.502/2020 PM, de 23/06/2020 (Evento
19), da lavra do Comandante-Geral da PM, promoveu o interessado à Graduação Subtenente
PM, em razão de contar com mais de 30 (trinta) anos de serviço. Em seguida, pela
Portaria no 1668-GOIASPREV, de 26/06/2020 (Evento 23), houve a concessão da Transferência
para a Reserva Remunerada . A apostila (Evento 31) fixou os proventos anuais em
142.237,55 (Cento e quarenta e dois mil, duzentos e trinta e sete reais e cinquenta
e cinco centavos). 3. No âmbito desta Corte de Contas, o Serviço de Registro informou
que não foi encontrado registro algum em nome do interessado (Evento 36) . O Serviço
de Registro de Atos de Pessoal e a Auditoria designada (Eventos 39 e 42) manifestaram-
se pela legalidade da admissão e da transferência para a reserva e sugeriram o
registro dos respectivos atos de forma concomitante . O Ministério Público de
Contas (Evento 40), por sua vez, posicionou-se pela negativa do registro de admissão
de transferência para reserva, em razão da afronta a normas legais e constitucionais.
4'
- 'Este trecho contém o Voto formal do Relator, localizado ao final de seu relatório,
no qual ele se manifesta pela legalidade da dispensa de licitação analisada e
propõe as determinações, recomendações e ressalvas que fundamentam a decisão do
Tribunal.. Face ao exposto, VOTO pela legalidade do Ato de Dispensa emergencial
de Licitação n. 008/2019, com a expedição das determinações e recomendações propostas,
cientificando o jurisdicionado de que, apesar de a dispensa de licitação também
se mostrar possível quando a situação de emergência decorrer da falta de . Número
do Processo: 201900047000699 . 6 6 planejamento, da desídia administrativa ou
da má gestão dos recursos púbicos, tal circunstância não afasta a responsabilidade
do gestor pela não realização da licitação em momento oportuno. Goiânia, 18 de
outubro de 2021. SAULO MARQUES MESQUITA Conselheiro GCSM/RNA . Número do Processo:
201900047000699 . Número do Processo: 201900047000699'
- source_sentence: prestação contas metrobus 2019 recomendação técnica jurídica fundamento
legal lotce
sentences:
- 'Este trecho apresenta o Acórdão do Tribunal de Contas do Estado de Goiás que
considerou legal o ato de concessão de pensão e determinou seu registro, seguido
pelo início do relatório detalhado sobre o processo.Pág. 1 1 ÓRGÃO: Goias Previdencia
INTERESSADO: Vilson de Souza ASSUNTO: 205-01-PENSÃO-CONCESSÃO RELATOR: SAULO MARQUES
MESQUITA AUDITOR: FLÁVIO LÚCIO RODRIGUES DA SILVA PROCURADOR: CARLOS GUSTAVO SILVA
RODRIGUES Vistos, oralmente expostos e discutidos os presentes Autos 202011129002504/205-01,
referentes ao seguinte ato de pensão: Servidor(a): Genoveva Lopes Felipe de Souza
. Cargo: Professor I, Referência Órgão: Secretaria de Estado da Educação. Óbito:
26 de abril de 2020. Beneficiário (a): Vilson de Souza. Data de início: 26 de
abril de 2020. Fundamento legal: Lei Complementar n. 77/2010. Pensão: calculada
em 10 de junho de 2020, no valor mensal de 1.853,29 . Tendo o relatório e o voto
como partes integrantes deste, ACORDA o TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS,
pelos votos dos integrantes de sua Primeira Câmara, ante as razões expostas pelo
Relator, em considerar legal o referido ato, determinando seu registro, nos termos
da Lei Orgânica e Regimento Interno deste Tribunal, para todos os fins legais
. À Secretaria Geral, para as providências a seu cargo. TRIBUNAL DE CONTAS DO
ESTADO DE GOIÁS, em Goiânia aos Acórdão No: 4011/2021Acórdão No: 4011/2021 TRIBUNAL
DE CONTAS DO ESTADO DE GOIÁS Processo no 202011129002504- Pág. 1 1 RELATÓRIO No
1683/2021 GCSM . RELATÓRIO Natureza Pensão Servidor(a) Genoveva Lopes Felipe de
Souza Cargo Professor I, Referência Órgão Secretaria de Estado da Educação Fundamento
legal Lei Complementar Estadual n'
- 'Este trecho conclui a seção de RELATÓRIO E VOTO do Acórdão, apresentando a análise
final da Conselheira Relatora. Ele detalha a conformidade do processo de concessão
de pensão com a legislação, menciona as manifestações favoráveis das unidades
técnicas do Tribunal de Contas e formaliza o voto pela legalidade e registro do
ato de pensão.. Constata-se que o processo foi devidamente instruído, observando-se
conformidade com a legislação aplicável, não havendo, portanto, impedimentos para
o registro do ato. Os documentos apresentados nos autos atendem de forma satisfatória
aos requisitos exigidos pelo art. 3o, 3o, da deste Tribunal . Diante do exposto,
e considerando as manifestações favoráveis da Unidade Técnica, do Ministério Público
de Contas e da Auditoria, VOTO pelo registro do ato de pensão. Goiânia, 03 de
fevereiro de 2025. CARLA CINTIA SANTILLO Conselheira Relatora Documento assinado
eletrônicamente com fundamento da Resolução Normativa 12/2017 do TCE-GO, Art.
6o. Número do Processo: 202411129002425 . 6o . Número do Processo: 202411129002425
. 6o, inc. I – login e senha Resolução Normativa no 002/2001'
- 'Este trecho finaliza o Voto da Conselheira Relatora no processo de Prestação
de Contas Anual da Metrobus referente a 2019. Ele conclui a recomendação técnica/jurídica,
faz referência a dispositivos legais relevantes para ressalvas, e é seguido pela
data, identificação da Relatora e detalhes técnicos da assinatura eletrônica do
documento.. 71 da LOTCE-GO. Goiânia, 22 de outubro de 2021. CARLA CINTIA SANTILLO
Conselheira Documento assinado eletrônicamente com fundamento da Resolução Normativa
12/2017 do TCE-GO, Art. 6o. Número do Processo: 202000047002720 . 6o. Número do
Processo: 202000047002720 . 6o, inc . I – login e senha'
- source_sentence: tomada de contas especial contas iliquidáveis longo prazo TCU jurisprudência acórdão
921/2009 ampla defesa
sentences:
- Localizado no início do Relatório, este segmento descreve a origem do processo
de fiscalização sobre acúmulo de cargos, as ações administrativas tomadas e as
sugestões iniciais dos órgãos envolvidos na apuração dos fatos.. 1 4 RELATÓRIO
No 815/2021 GCCR. 1 . Cuidam os autos de Relatório de Auditoria de Conformidade
realizada pela Controladoria-Geral do Estado junto à Secretaria de Estado da Saúde,
para apurar supostas irregularidades em relação ao acúmulo de cargos públicos
ocupados pelo Sr. Gilson Reginaldo. 2 . O Serviço de Fiscalização de Atos de Pessoal,
via Instrução Técnica 43/2016, pugnou pela citação da SES para fins de apresentação
das medidas corretivas adotadas em face dos fiscalizados (Evento 1, p. 25/28).
3. Após citação, a SES/GO (Evento 1, p. 39/40) informou que foi instaurado processo
administrativo disciplinar em face do Sr . Gilson Reginaldo e Ana Lázara Azara
Rodrigues, juntando a documentação pertinente (Evento 1, p. 41/42). 4. Após diligências,
o Serviço de Fiscalização de Atos de Pessoal, por meio da Instrução Técnica Conclusiva
47/2017 (Evento 4, p . 79/82), sugeriu a absolvição da Ana Maria Azara Rodrigues,
então Gerente da Unidade de Saúde, suposta responsável pela omissão na fiscalização
da frequência do servidor, e a extinção do feito relativamente ao Sr. Gilson Reginaldo,
ambos com base no resultado do julgamento do processo administrativo disciplinar,
concluindo pelo seu arquivamento. 5 . O membro do Ministério Público de Contas,
por meio do Parecer no 890/2017 (Evento 4, p. 92), opinou pela conversão dos autos
em Tomada de Contas Especial, com fulcro no art
- 'Contextualização: No Voto do Relator, justifica-se a decisão de considerar as
contas iliquidáveis e arquivar o processo de Tomada de Contas Especial devido
ao extenso lapso temporal desde os fatos. O trecho apresenta a base legal e jurisprudencial
para essa decisão, citando a prática e entendimentos do Tribunal de Contas da
União (TCU) sobre a dificuldade de defesa e a racionalização administrativa em
casos de grande demora.. 6o, II, Instrução Normativa no 71/2012 TCU) . Segundo
bem ponderou o Serviço de Contas do Governo Supervisão I, "Vale frisar, na hipótese
em apreço, que quase 10 anos se passaram desde a ocorrência dos fatos ensejadores
desta tomada de contas especial e a essa altura é complicado, senão impossível,
reunir todos os elementos de provas necessários à instrução do processo, nem seria
razoável exigir, depois do longo período de tempo decorrido , que os responsáveis
tivessem acesso aos documentos a serem usados para subsidiar sua defesa ." Confirmando
o entendimento supracitado, expõe-se a ementa do Acórdão no 921/2009-TCU: TOMADA
DE CONTAS ESPECIAL. OMISSÃO NO DEVER DE PRESTAR CONTAS. CITAÇÃO. REVELIA. CONTAS
IRREGULARES. DÉBITO. RECURSO DE REVISÃO. CONHECIMENTO. PROVIMENTO. COMPROMETIMENTO
DA AMPLA DEFESA PELO LONGO DECURSO DE PRAZO. CONTAS ILIQUIDÁVEIS. TRANCAMENTO
DAS CONTAS. 1 . Consideram-se iliquidáveis as contas, ordenando-se o seu trancamento,
em razão da impossibilidade do exercício de ampla defesa, pelo longo decurso de
tempo entre a prática do ato e a citação do responsável." Acórdão 921/2009 TCU
Plenário. Relator: Ministro Raimundo Carreiro. Data de Julgamento: 6/5/2009. (grifo
nosso)'
- 'Após apresentar a regra da Lei nº 8.080/90 que proíbe o pagamento pelo SUS de
produtos não autorizados pela Anvisa, este trecho detalha as exceções a essa regra
(Parágrafo único) e introduz a análise de um estudo técnico que interpreta a aplicação
dessas normas, especialmente quanto a medicamentos genéricos e requisitos técnicos
vs. bula, levando a uma discussão sobre a harmonização legal.. Parágrafo único
. Excetuam-se do disposto neste artigo: I medicamento e produto em que a indicação
de uso seja distinta daquela aprovada no registro na Anvisa, desde que seu uso
tenha sido recomendado pela Comissão Nacional de Incorporação de Tecnologias no
Sistema Único de Saúde (Conitec), demonstradas as evidências científicas sobre
a eficácia, a acurácia, a efetividade e a segurança , e esteja padronizado em
protocolo estabelecido pelo Ministério da Saúde; II medicamento e produto recomendados
pela Conitec e adquiridos por intermédio de organismos multilaterais internacionais,
para uso em programas de saúde pública do Documento assinado eletrônicamente com
fundamento da Resolução Normativa 12/2017 do TCE-GO, Art . 6o. Número do Processo:
202400047004621 . 19 27 Ministério da Saúde e suas entidades vinculadas, nos termos
do 5o do art. 8o da Lei no 9.782, de 26 de janeiro de 1999 . (grifamos) Nesses
termos, o Serviço de Fiscalização de Licitações realiza um relevante estudo sobre
a matéria e conclui que a exigência da prescrição em bula poderia ser evitado
em favor da concorrência, pois os medicamentos genéricos possuem autorização para
venda da ANVISA no Brasil Com efeito, verifica-se que, salvo nos casos excepcionais
previstos no parágrafo único, o SUS não pode pagar , ressarcir ou reembolsar medicamento,
produto ou procedimento de uso não autorizado pela Anvisa . Conquanto haja suposta
incompatibilidade entre as previsões da Lei Federal no 9.787/99 e da Lei Federal
no 8'
- source_sentence: voto legalidade pensão servidor professor nível estadual lei complementar
772010 manifestações favoráveis unidades técnicas procuradoria auditoria resolução
222008 artigo 46 inciso x processo 201911129003895 justificativa dispensada goiania
agosto 2021 fundamentação relatorio gcsmr
sentences:
- 'Este trecho contém o voto do relator, que fundamenta a decisão pela legalidade
da pensão, e um resumo detalhado dos elementos do processo, como as partes envolvidas,
o cargo, o fundamento legal e as manifestações favoráveis das unidades técnicas..
1 1 RELATÓRIO No 1967/2021 GCSM . VOTO Tendo em vista que há uniformidade nas
manifestações da unidade técnica, da Auditoria e da Procuradoria-Geral de Contas,
fica dispensada a formalização da justificativa do presente voto, eis que adoto
igual entendimento, nos termos do artigo 46, inciso X, da Resolução n. 22/2008.
Face ao exposto, VOTO pelo registro da pensão versada nos presentes autos. Goiânia,
16 de agosto de 2021 . SAULO MARQUES MESQUITA Conselheiro GCSM/NRF RELATÓRIO Natureza
Pensão Servidor(a) Lindanita Neves Salgado Cargo Professor Nível Órgão Secretaria
de Estado da Educação Fundamento legal Lei Complementar Estadual n. 77/2010 Beneficiário(s)
Antônio Eustáquio Salgado, viúvo. Unidade Técnica Favorável Procuradoria de Contas
Favorável Auditoria Favorável . Número do Processo: 201911129003895 . Número do
Processo: 201911129003895'
- Este trecho compreende a maior parte do Voto do Relator, onde ele detalha sua
análise jurídica e técnica sobre a legalidade dos atos de admissão e aposentadoria
da servidora, responde a questões levantadas por outras áreas do Tribunal, como
a Procuradoria de Contas e a Auditoria, e fundamenta sua recomendação final pelo
registro dos referidos atos.. 16.168/07, nos termos da Lei n. 19.638/17. A admissão
ocorreu mediante concurso público, merecendo registro. Para tal fim, mostram-se
suficientes os documentos acostados no Evento 6, pág. 1 . Por sua vez, quanto
à aposentadoria, o registro deve ser admitido, tendo em vista que restaram atendidas
as disposições da Emenda Constitucional n 41/2003 . No que diz respeito à alegação
ministerial referente ao provimento derivado, não se constitui em óbice ao registro
da aposentadoria, haja vista a pacífica jurisprudência desta Corte, escorada no
princípio da segurança jurídica, invocando-se como precedentes os fundamentos
lançados nos autos n. 200800066003794 e n. 201000066005561 . A respeito da fixação
dos proventos, a demonstração da composição contida nos autos encontra-se compatível
com o fundamento legal do ato de jubilamento e a legislação aplicável (Evento
23). Com efeito, demonstrados os fundamentos jurídicos do ato em tela, resta concluir
pela legalidade da aposentadoria . A documentação acostada aos autos supre com
eficiência a finalidade das exigências contidas no artigo 3o, 1o e 2o, da , desta
Corte. Quanto à multa sugerida pela Auditoria, não se mostra razoável sua aplicação,
uma vez que o atraso no envio dos documentos não causou qualquer prejuízo à atuação
do Controle Externo, justificando-se pela burocracia relacionada à tramitação
processual . Face ao exposto, VOTO pelo registro dos atos de admissão e aposentadoria.
Goiânia, 05 de outubro de 2021
- 'Este trecho transita do Acórdão para o Relatório e Voto, detalhando o processo
de aposentadoria voluntária de Ana Maria de Souza Marmori, incluindo a descrição
do ato, os pareceres técnicos favoráveis e a introdução da discussão sobre a competência
do Tribunal.. TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS, em Goiânia Acórdão No: 4804/2024Acórdão
No: 4804/2024 TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS Processo no 202300017000030-
Pág . 1 2 RELATÓRIO E VOTO No 784/2024 GCCS PROCESSO No: 202300017000030 ASSUNTO:
APOSENTADORIA CONCESSÃO ORIGEM: SECRETARIA DE ESTADO DA ADMINISTRAÇÃO INTERESSADA:
ANA MARIA DE SOUZA MARMORI Trata-se de ato de aposentadoria voluntária, em nome
de ANA MARIA DE SOUZA MARMORI, no cargo de Auxiliar de Gestão Administrativa,
Classe Padrão do Grupo Ocupacional de mesmo nome, do Quadro Permanente de Pessoal
, do órgão Secretaria de Estado da Administração, submetido, para fins de registro,
à apreciação do Tribunal de Contas do Estado de Goiás, encaminhado a esta Corte
de Contas para os fins do artigo 26, III, da Constituição do Estado de Goiás,
art . 1o, inciso IV, da Lei no 16.168/2007 (Lei Orgânica do TCE-GO). Encaminhados
os autos a esta Corte de Contas, o Serviço de Registro informou que foi encontrado
registro de Admissão em nome da interessada. Em seguida, a Unidade Técnica, o
Ministério Público de Contas e a Auditoria manifestaram-se pela legalidade e registro
do ato concessório de aposentadoria. É o relatório. Passo ao voto . A competência
do Tribunal de Contas para registro do ato em apreço tem amparo no artigo 1o,
inciso IV, da Lei no 16.168/07 artigo 26, inciso III, da Constituição do Estado
de Goiás'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on google/embeddinggemma-300m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision c5cfa06e5e282a820e85d57f7fb053207494f41d -->
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(4): Normalize()
)
```
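Because the final `Normalize()` module L2-normalizes the output, cosine similarity and dot product are interchangeable for this model. A minimal sanity check is sketched below; the model id is a placeholder, as in the usage snippet that follows:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Placeholder id, as elsewhere in this card
model = SentenceTransformer("sentence_transformers_model_id")
emb = model.encode(["exemplo de consulta"])
# The Normalize() module should make every vector unit-length
print(np.linalg.norm(emb, axis=1))  # expected: values ~1.0
```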
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
queries = [
"voto legalidade pens\u00e3o servidor professor n\u00edvel estadual lei complementar 772010 manifesta\u00e7\u00f5es favor\u00e1veis unidades t\u00e9cnicas procuradoria auditoria resolu\u00e7\u00e3o 222008 artigo 46 inciso x processo 201911129003895 justificativa dispensada goiania agosto 2021 fundamenta\u00e7\u00e3o relatorio gcsmr",
]
documents = [
'Este trecho contém o voto do relator, que fundamenta a decisão pela legalidade da pensão, e um resumo detalhado dos elementos do processo, como as partes envolvidas, o cargo, o fundamento legal e as manifestações favoráveis das unidades técnicas.. 1 1 RELATÓRIO No 1967/2021 GCSM . VOTO Tendo em vista que há uniformidade nas manifestações da unidade técnica, da Auditoria e da Procuradoria-Geral de Contas, fica dispensada a formalização da justificativa do presente voto, eis que adoto igual entendimento, nos termos do artigo 46, inciso X, da Resolução n. 22/2008. Face ao exposto, VOTO pelo registro da pensão versada nos presentes autos. Goiânia, 16 de agosto de 2021 . SAULO MARQUES MESQUITA Conselheiro GCSM/NRF RELATÓRIO Natureza Pensão Servidor(a) Lindanita Neves Salgado Cargo Professor Nível Órgão Secretaria de Estado da Educação Fundamento legal Lei Complementar Estadual n. 77/2010 Beneficiário(s) Antônio Eustáquio Salgado, viúvo. Unidade Técnica Favorável Procuradoria de Contas Favorável Auditoria Favorável . Número do Processo: 201911129003895 . Número do Processo: 201911129003895',
'Este trecho transita do Acórdão para o Relatório e Voto, detalhando o processo de aposentadoria voluntária de Ana Maria de Souza Marmori, incluindo a descrição do ato, os pareceres técnicos favoráveis e a introdução da discussão sobre a competência do Tribunal.. TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS, em Goiânia Acórdão No: 4804/2024Acórdão No: 4804/2024 TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS Processo no 202300017000030- Pág . 1 2 RELATÓRIO E VOTO No 784/2024 GCCS PROCESSO No: 202300017000030 ASSUNTO: APOSENTADORIA CONCESSÃO ORIGEM: SECRETARIA DE ESTADO DA ADMINISTRAÇÃO INTERESSADA: ANA MARIA DE SOUZA MARMORI Trata-se de ato de aposentadoria voluntária, em nome de ANA MARIA DE SOUZA MARMORI, no cargo de Auxiliar de Gestão Administrativa, Classe Padrão do Grupo Ocupacional de mesmo nome, do Quadro Permanente de Pessoal , do órgão Secretaria de Estado da Administração, submetido, para fins de registro, à apreciação do Tribunal de Contas do Estado de Goiás, encaminhado a esta Corte de Contas para os fins do artigo 26, III, da Constituição do Estado de Goiás, art . 1o, inciso IV, da Lei no 16.168/2007 (Lei Orgânica do TCE-GO). Encaminhados os autos a esta Corte de Contas, o Serviço de Registro informou que foi encontrado registro de Admissão em nome da interessada. Em seguida, a Unidade Técnica, o Ministério Público de Contas e a Auditoria manifestaram-se pela legalidade e registro do ato concessório de aposentadoria. É o relatório. Passo ao voto . A competência do Tribunal de Contas para registro do ato em apreço tem amparo no artigo 1o, inciso IV, da Lei no 16.168/07 artigo 26, inciso III, da Constituição do Estado de Goiás',
'Este trecho compreende a maior parte do Voto do Relator, onde ele detalha sua análise jurídica e técnica sobre a legalidade dos atos de admissão e aposentadoria da servidora, responde a questões levantadas por outras áreas do Tribunal, como a Procuradoria de Contas e a Auditoria, e fundamenta sua recomendação final pelo registro dos referidos atos.. 16.168/07, nos termos da Lei n. 19.638/17. A admissão ocorreu mediante concurso público, merecendo registro. Para tal fim, mostram-se suficientes os documentos acostados no Evento 6, pág. 1 . Por sua vez, quanto à aposentadoria, o registro deve ser admitido, tendo em vista que restaram atendidas as disposições da Emenda Constitucional n 41/2003 . No que diz respeito à alegação ministerial referente ao provimento derivado, não se constitui em óbice ao registro da aposentadoria, haja vista a pacífica jurisprudência desta Corte, escorada no princípio da segurança jurídica, invocando-se como precedentes os fundamentos lançados nos autos n. 200800066003794 e n. 201000066005561 . A respeito da fixação dos proventos, a demonstração da composição contida nos autos encontra-se compatível com o fundamento legal do ato de jubilamento e a legislação aplicável (Evento 23). Com efeito, demonstrados os fundamentos jurídicos do ato em tela, resta concluir pela legalidade da aposentadoria . A documentação acostada aos autos supre com eficiência a finalidade das exigências contidas no artigo 3o, 1o e 2o, da , desta Corte. Quanto à multa sugerida pela Auditoria, não se mostra razoável sua aplicação, uma vez que o atraso no envio dos documentos não causou qualquer prejuízo à atuação do Controle Externo, justificando-se pela burocracia relacionada à tramitação processual . Face ao exposto, VOTO pelo registro dos atos de admissão e aposentadoria. Goiânia, 05 de outubro de 2021',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[ 0.9358, 0.0405, -0.4628]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 49,816 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 37.28 tokens</li><li>max: 190 tokens</li></ul> | <ul><li>min: 88 tokens</li><li>mean: 474.57 tokens</li><li>max: 2048 tokens</li></ul> | <ul><li>min: 100 tokens</li><li>mean: 471.19 tokens</li><li>max: 1484 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>revisão transferência reserva remunerada militar promoção ato bravura lei 16168/07 artigo 26 inciso iii constituição estado goias artigo 6 iii 9 lei 15704/2006 lei 18182/2013 procedimento análise legalidade registro ato militar inativo processo 201800003007957</code> | <code>Este trecho inicia a seção de Relatório e Voto do Acórdão, apresentando o caso de revisão de transferência para reserva do interessado, resumindo os registros encontrados, as manifestações favoráveis dos órgãos técnicos e citando a base legal para a análise e a promoção.. TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS, em Goiânia, aos Acórdão No: 3307/2021Acórdão No: 3307/2021 TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS Processo no 201800003007957- Pág. 1 1 RELATÓRIO E VOTO No 78/2021 GCCS PROCESSO No: 201800003007957 ASSUNTO: TRANSFERÊNCIA PARA RESERVA REVISÃO ORIGEM: POLÍCIA MILITAR INTERESSADO: ELIAS FERREIRA TOSTA 1 . Tratam os autos sobre revisão de transferência para reserva remunerada de ELIAS FERREIRA TOSTA, em virtude de promoção por ato de bravura ao posto de Coronel, da Polícia Militar do Estado de Goiás. 2 . Encaminhados os autos a esta Corte de Contas, o Serviço de Registro informou que foram encontrados os seguintes registros em nome do interessado: a) Contrato de Trabalho, a partir de ...</code> | <code>Conclusão do Voto do Relator com data, local e assinatura, formalizando a decisão sobre a legalidade dos atos de pessoal.. Goiânia, 12 de novembro de 2024 . CELMAR RECH Conselheiro Documento assinado eletrônicamente com fundamento da Resolução Normativa 12/2017 do TCE-GO, Art. 6o. Número do Processo: 202100006066896 . 6o. Número do Processo: 202100006066896 . 6o, inc. I – login e senha</code> |
| <code>legalidade admissão e transferência reserva policial militar goias processo 201900002085692</code> | <code>Este trecho finaliza o Voto do Conselheiro Relator, apresentando sua conclusão e recomendação pela legalidade e registro dos atos de admissão e transferência para a reserva do policial militar, conforme analisado no corpo do relatório e voto anteriores.. Saraiva, 8a ed., pág. 239). 16 . Neste contexto, ao teor de todo o exposto, devidamente instruídos estes autos, VOTO pela legalidade do registro, em nome de Varley Alves Viana, RG no 18 .667 PM-GO, dos atos de: admissão, na graduação de Soldado PM, a partir de 20/09/1986; e de Transferência para a Reserva, na graduação de Sargento PM, do Quadro da Polícia Militar do Estado de Goiás, com proventos integrais, nos termos da proposta de acórdão que ora submeto à deliberação deste Colegiado. Goiânia, 31 de maio de 2021. CELMAR RECH Conselheiro . Número do Processo: 201900002085692 . Número do Processo: 201900002085692</code> | <code>Este segmento inicia o relatório do processo de aposentadoria, detalhando o caso de Roberto Matias da Silva, incluindo histórico de serviço, tempo de contribuição e idade, além das etapas administrativas iniciais e cálculo dos proventos antes da análise do Tribunal.. À Secretaria Geral para as providências a seu cargo. TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS, Goiânia, Acórdão No: 1206/2025Acórdão No: 1206/2025 TRIBUNAL DE CONTAS DO ESTADO DE GOIÁS Processo no 202300036013533- Pág. 1 3 RELATÓRIO No 97/2025 GCCR. 1 . Tratam os autos de aposentadoria em nome de Roberto Matias da Silva, no cargo de Assistente de Transportes e Obras, Classe Padrão "III", do Quadro Permanente dos Servidores Efetivos da Agência Goiana de Infraestrutura e Transportes, com fundamento no artigo 20, incisos I a IV, 2o, inciso I, da EC no 103/2019, artigo 72 da LC no 161/2020, e na EC Estadual no 65/2019. 2 . O vínculo com a Administração Pública (CLT) iniciou-se em 06/06/1985, no cargo de Agente Administrativo I, d...</code> |
| <code>concessão aposentadoria legalidade processo administrativo parecer cge lei 16168/07</code> | <code>Este trecho faz parte da seção "RELATÓRIO E VOTO" de um Acórdão do Tribunal de Contas do Estado de Goiás que analisa e decide sobre a legalidade de uma concessão de aposentadoria. Ele detalha a análise da Relatora, apresentando pareceres técnicos favoráveis e abordando pontos específicos levantados pelo Ministério Público de Contas sobre aspectos processuais e de legalidade, como a ausência de parecer da Controladoria-Geral do Estado e a validade de enquadramentos passados.. 5 . Por sua vez, a Auditoria se manifestou pela legalidade e registro do ato de concessão de aposentadoria. 6. É o relatório. Passo ao voto. 7. A competência do Tribunal de Contas para registro do ato em apreço tem amparo no artigo 1o, inciso IV, da Lei no 16.168/07 artigo 26, inciso III, da Constituição do Estado de Goiás. 8 . Observa-se que o feito está devidamente instruído e compatível com a legislação em vigor, razão pela qual não vislumbro óbice ao registro do ato. 9. A documentação acostada aos autos supre c...</code> | <code>Este trecho, localizado no relatório do processo, detalha as etapas de análise interna no Tribunal de Contas para o registro das admissões da SANEAGO. Ele descreve as verificações preliminares e a conclusão do serviço de fiscalização que recomendou a legalidade dos atos, iniciando a apresentação dos nomes dos admitidos.. 2. Devidamente instruídos e ordenados os atos no âmbito da Administração, vem o feito ao Tribunal de Contas para o devido controle de legalidade e registro. 3 . No Tribunal de Contas, preliminarmente, o Serviço de Registro informou que não foram encontrados registros em nome dos servidores interessados (evento 13). 4 . Em seguida, o Serviço de Fiscalização de Atos de Pessoal I (evento 46), informou que a instrução processual está completa nos termos dos atos normativos de regência, como também foram observados todos os pressupostos legais relativos ao preenchimento dos requisitos para registro dos atos de admissão , razão por que propôs considerar legal os atos de admi...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
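For reference, here is a minimal sketch of how this loss is typically wired into Sentence Transformers training. The dataset contents and output path are placeholders, not the exact script used for this model; the hyperparameters mirror the values listed below.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("google/embeddinggemma-300m")

# Placeholder triplets with the same columns as the training set
train_dataset = Dataset.from_dict({
    "anchor": ["query text"],
    "positive": ["matching passage"],
    "negative": ["non-matching passage"],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",           # placeholder
    per_device_train_batch_size=1,  # matches the card's hyperparameters
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.1,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```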
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 1
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `prompts`: task: sentence similarity | query:
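Since training prepended the prompt `task: sentence similarity | query: ` to queries, applying the same prompt at inference keeps the input distribution matched. A hedged sketch; whether the prompt is stored in this model's configuration has not been verified here:

```python
# Assumes `model` is loaded as in the usage section above.
# encode_query() applies the configured query prompt automatically
# (if it was saved with the model); the explicit form is:
query_emb = model.encode(
    ["aposentadoria registro legalidade"],
    prompt="task: sentence similarity | query: ",
)
```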
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: task: sentence similarity | query:
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-----:|:-----:|:-------------:|
| 1.0 | 49816 | 0.4105 |
| 2.0 | 99632 | 0.1797 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 5.1.0
- Transformers: 4.57.0.dev0
- PyTorch: 2.7.1+cu126
- Accelerate: 1.10.0
- Datasets: 3.6.0
- Tokenizers: 0.22.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
AXERA-TECH/YOLOv7-Face
|
AXERA-TECH
| 2025-09-22T11:33:31Z | 4 | 0 | null |
[
"onnx",
"YOLOv7",
"YOLOv7-Face",
"object-detection",
"en",
"license:mit",
"region:us"
] |
object-detection
| 2025-03-23T07:44:22Z |
---
license: mit
language:
- en
pipeline_tag: object-detection
tags:
- YOLOv7
- YOLOv7-Face
---
# YOLOv7-FACE
This version of YOLOv7-FACE has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
## Conversion tool links
If you are interested in model conversion, you can export an axmodel with the following resources:
- [The AXera Platform samples repo](https://github.com/AXERA-TECH/ax-samples), where you can find a detailed guide
- [Pulsar2 documentation: how to convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)
| Chip | Inference time |
|--|--|
|AX650| 12.6 ms |
|AX630C| TBD ms |
## How to use
Download all files from this repository to the device.
```
root@ax650:~/YOLOv7-Face# tree
.
|-- ax650
| `-- yolov7-face.axmodel
|-- ax_yolov7_face
|-- selfie.jpg
`-- yolov7_face_out.jpg
```
### Inference
Input image:

#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro)
```
root@ax650:~/YOLOv7-Face# ./ax_yolov7_face -m ax650/yolov7-face.axmodel -i selfie.jpg
--------------------------------------
model file : ax650/yolov7-face.axmodel
image file : selfie.jpg
img_h, img_w : 640 640
--------------------------------------
Engine creating handle is done.
Engine creating context is done.
Engine get io info is done.
Engine alloc io is done.
Engine push input is done.
--------------------------------------
post process cost time:8.70 ms
--------------------------------------
Repeat 1 times, avg time 12.59 ms, max_time 12.59 ms, min_time 12.59 ms
--------------------------------------
detection num: 174
0: 91%, [1137, 869, 1283, 1065], face
0: 91%, [1424, 753, 1570, 949], face
......
0: 45%, [1658, 362, 1677, 387], face
0: 45%, [1445, 437, 1467, 462], face
--------------------------------------
root@ax650:~/YOLOv7-Face#
```
Output image:

#### Inference with M.2 Accelerator card
```
(base) axera@raspberrypi:~/lhj/YOLOv7-Face $ ./axcl_aarch64/axcl_yolov7_face -m ax650/yolov7-face.axmodel -i selfie.jpg
--------------------------------------
model file : ax650/yolov7-face.axmodel
image file : selfie.jpg
img_h, img_w : 640 640
--------------------------------------
axclrtEngineCreateContextt is done.
axclrtEngineGetIOInfo is done.
grpid: 0
input size: 1
name: images
1 x 640 x 640 x 3
output size: 3
name: 511
1 x 80 x 80 x 63
name: 520
1 x 40 x 40 x 63
name: 529
1 x 20 x 20 x 63
==================================================
Engine push input is done.
--------------------------------------
post process cost time:8.29 ms
--------------------------------------
Repeat 1 times, avg time 12.23 ms, max_time 12.23 ms, min_time 12.23 ms
--------------------------------------
detection num: 277
0: 91%, [1137, 869, 1283, 1065], face
0: 91%, [1424, 753, 1570, 949], face
0: 89%, [1305, 764, 1403, 900], face
0: 87%, [1738, 786, 1796, 860], face
......
0: 20%, [1120, 570, 1145, 604], face
0: 20%, [1025, 390, 1041, 413], face
--------------------------------------
```
Output image:

|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND3-checkpoint-epoch-100
|
MattBou00
| 2025-09-22T11:31:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T11:31:00Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_11-15-41/checkpoints/checkpoint-epoch-100")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_11-15-41/checkpoints/checkpoint-epoch-100")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_11-15-41/checkpoints/checkpoint-epoch-100")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
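In recent TRL versions, the forward pass of `AutoModelForCausalLMWithValueHead` returns a `(lm_logits, loss, value)` tuple, so the value-head estimates can be read off directly. A small sketch under that assumption:

```python
# Continuing from the snippet above; assumes the (lm_logits, loss, value)
# return convention of AutoModelForCausalLMWithValueHead.
lm_logits, loss, value = outputs
print(value.shape)  # one scalar value estimate per input token
```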
|
qualiaadmin/919b93d3-322d-4db5-8a96-f47676b69c2e
|
qualiaadmin
| 2025-09-22T11:26:20Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-22T11:25:04Z |
---
base_model: lerobot/smolvla_base
datasets: Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- smolvla
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
    --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
katucheftitaniumcuttingboard/katucheftitaniumcuttingboard
|
katucheftitaniumcuttingboard
| 2025-09-22T11:24:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-22T11:24:07Z |
# KatuChef Titanium Cutting Board – Durable, Hygienic, Survival-Ready Design
## Why the KatuChef Titanium Cutting Board Is the Last Cutting Board You'll Ever Need
When you're at the point of comparing cutting boards, you're not just looking for any surface to chop vegetables. You're looking for a long-lasting, safe, and multifunctional solution that fits into your lifestyle—whether that’s in your kitchen, at the campsite, or on a survival trek. That’s where the **[KatuChef Titanium Cutting Board](https://www.diginear.com/2PGQH1JJ/217MJKS3/)** enters the conversation—not as an alternative, but as the definitive choice.
## **[Don’t Wait – Buy the KatuChef Titanium Cutting Board Today and Upgrade Your Kitchen](https://www.diginear.com/2PGQH1JJ/217MJKS3/)**
## Built to Outperform: What Makes KatuChef Titanium Cutting Board Stand Out?
The cutting board market is flooded with options—plastic, bamboo, glass, wood, and even hybrid boards. But few can match the innovation, resilience, and versatility of the KatuChef Titanium Cutting Board. Why? Because titanium isn’t just a buzzword—it’s a game-changer.
## Here’s what sets it apart:
- Military-Grade Titanium Construction: Unlike conventional materials that warp, crack, or retain bacteria, titanium is non-porous, corrosion-resistant, and ultra-durable. You’re investing in a board that lasts for decades, not months.
- Knife-Friendly Surface: While some hard boards can dull your knives over time, the KatuChef Titanium Cutting Board is engineered to balance durability with edge preservation, so your premium blades stay sharper for longer.
- Hygienic & Odor-Free: Say goodbye to lingering garlic smells or bacterial buildup. This board resists odors and is easy to sanitize—ideal for both raw and cooked food prep.
- Ultra-Light & Portable: Despite its strength, this board is surprisingly lightweight, making it perfect for camping, hiking, RV kitchens, or bug-out bags. The slim design fits into compact spaces without sacrificing surface area.
- Multi-Functional Design: It’s more than a cutting board—it’s a survival tool. You can use it as a heat shield, emergency signaling device, or even as a makeshift plate or food tray in outdoor scenarios.
## Who Is the **[KatuChef Titanium Cutting Board for Kitchen](https://www.diginear.com/2PGQH1JJ/217MJKS3/)** For?
**This is for:**
- Home Chefs who demand professional-grade tools in their kitchen
- Outdoor Enthusiasts who value gear that serves more than one purpose
- Preppers and Survivalists who understand the importance of durable, multi-use gear
- Minimalists who want fewer, better things
- Eco-Conscious Consumers who prefer long-lasting products over disposable plastics
If that sounds like you, then you already understand why this product is worth the investment.
## Real-World Durability: Tested in the Kitchen and the Wild
What truly differentiates the **[KatuChef Titanium Cutting Board](https://www.diginear.com/2PGQH1JJ/217MJKS3/)** is its real-world performance. Whether you’re slicing juicy tomatoes on your countertop or filleting a fish riverside, this board handles it all—without warping, cracking, or staining.
Titanium also handles extreme temperatures with ease. That means you can use it as a surface for hot pots or even on campfires (when needed in survival settings). Try that with plastic or wood.
## A Hygienic Choice in an Age of Uncertainty
In today’s world, food safety and hygiene are more important than ever. Wooden boards, while aesthetic, can harbor bacteria in their grains. Plastic boards stain, warp, and can leach microplastics over time. Glass can shatter, and bamboo can split.
The KatuChef Titanium Cutting Board is naturally resistant to microbial growth and doesn’t absorb liquids. It’s dishwasher safe and can be cleaned with boiling water or disinfectants without damaging its surface.
For households that take health seriously—or for off-grid adventurers who can’t afford contamination—it’s a clear winner.
## **[Don’t Wait – Buy the KatuChef Titanium Cutting Board Today and Upgrade Your Kitchen](https://www.diginear.com/2PGQH1JJ/217MJKS3/)**
## What Customers Are Saying
Buyers who have made the switch to the KatuChef Titanium Cutting Board report:
- Noticeably cleaner and odor-free food prep
- Peace of mind knowing their cutting board won’t chip or splinter
- Unexpected versatility in outdoor cooking and survival uses
- Long-term satisfaction, saying they’ll “never go back” to conventional boards
This isn’t hype—it’s real feedback from people who value quality and longevity.
## Ready to Upgrade?
You’ve done your research. You’ve compared plastic, wood, and glass. You understand the value of quality materials. Now you’re looking for a cutting board that matches your expectations—for performance, durability, hygiene, and versatility.
The **[KatuChef Titanium Cutting Board](https://www.diginear.com/2PGQH1JJ/217MJKS3/)** isn’t a trend. It’s a long-term solution built for serious users. Whether you're preparing a gourmet meal at home or cleaning your catch in the backcountry, this is the board you want by your side.
#### Make the switch. Invest in reliability. Choose KatuChef.
**❗❗ 👇 Click Here To Buy KatuChef Titanium Cutting Board 👇 ❗❗**
https://www.diginear.com/2PGQH1JJ/217MJKS3/
**More Links**
https://katucheftitaniumcuttingboard5.wordpress.com/
https://site-23ybohhr6.godaddysites.com/
https://katuchef-titanium-cutting-board-4.jimdosite.com/
https://zenodo.org/records/17175190
https://katucheftitaniumcuttingboardus.quora.com/
https://www.provenexpert.com/katuchef-titanium-cutting-board5/
https://www.pixiv.net/en/artworks/135409111
https://www.reddit.com/user/KatuChefcuttingboard/
https://filmfreeway.com/katucheftitaniumcuttingboardus
|
mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF
|
mradermacher
| 2025-09-22T11:24:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"brainstorm",
"qwen",
"qwen3",
"qwencoder",
"brainstorm 20x",
"creative",
"all uses cases",
"Jan-V1",
"float32",
"horror",
"32 bit precision",
"science fiction",
"fantasy",
"Star Trek",
"finetune",
"thinking",
"reasoning",
"unsloth",
"en",
"dataset:progs2002/star-trek-tng-scripts",
"base_model:DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B",
"base_model:quantized:DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-22T10:40:42Z |
---
base_model: DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B
datasets:
- progs2002/star-trek-tng-scripts
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all uses cases
- Jan-V1
- float32
- horror
- 32 bit precision
- science fiction
- fantasy
- Star Trek
- finetune
- thinking
- reasoning
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q4_0.gguf) | i1-Q4_0 | 3.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q4_1.gguf) | i1-Q4_1 | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q6_K.gguf) | i1-Q6_K | 5.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
afiyarah/nomic-ins-make
|
afiyarah
| 2025-09-22T11:21:07Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"nomic_bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:9431",
"loss:CosineSimilarityLoss",
"custom_code",
"arxiv:1908.10084",
"base_model:nomic-ai/nomic-embed-text-v1.5",
"base_model:finetune:nomic-ai/nomic-embed-text-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-22T11:20:53Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:9431
- loss:CosineSimilarityLoss
base_model: nomic-ai/nomic-embed-text-v1.5
widget:
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: تي زد كو'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: ليبهر'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: ماكسوس'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: بيكويرسا'
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: مركبة atv'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: اس دي'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: شفرولية'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: جي ام سي'
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: باكهو'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: سيف'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: مرسيدس'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: اكسوهو'
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: فوكـي'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: فوكي'
- 'In the car insurance domain, represent this car make entity in english for entity
similarity matching: batubifang'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: سينبوجن'
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: آمي'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: تريومف'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: شانكسي'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: دي اف ام'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on nomic-ai/nomic-embed-text-v1.5
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: insurance val
type: insurance-val
metrics:
- type: pearson_cosine
value: 0.8822604422337141
name: Pearson Cosine
- type: spearman_cosine
value: 0.6655851533966861
name: Spearman Cosine
---
# SentenceTransformer based on nomic-ai/nomic-embed-text-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) <!-- at revision e5cf08aadaa33385f5990def41f7a23405aec398 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'NomicBertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
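The pooling module above uses mean pooling over token embeddings (`pooling_mode_mean_tokens: True`), ignoring padding positions. A minimal sketch of that operation, assuming hypothetical `(batch, seq_len, 768)` token embeddings and a matching attention mask (the library performs this internally):
```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()       # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)     # sum over non-padding tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)          # avoid division by zero
    return summed / counts                            # (batch, 768) sentence embedding
```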
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'In the car insurance domain, represent this car make entity in arabic for entity similarity matching: آمي',
'In the car insurance domain, represent this car make entity in arabic for entity similarity matching: دي اف ام',
'In the car insurance domain, represent this car make entity in arabic for entity similarity matching: تريومف',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.2315, 0.2338],
# [0.2315, 1.0000, 0.1655],
# [0.2338, 0.1655, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `insurance-val`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8823 |
| **spearman_cosine** | **0.6656** |
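To reproduce these metrics on your own validation split, the evaluator can be run directly against the loaded model. A minimal sketch — the sentence pair and gold score below are placeholders, and `sentence_transformers_model_id` stands in for this repository's id:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("sentence_transformers_model_id")

# Placeholder validation pairs with gold similarity scores in [0, 1]
sentences1 = ["In the car insurance domain, represent this car make entity in arabic for entity similarity matching: فوكـي"]
sentences2 = ["In the car insurance domain, represent this car make entity in arabic for entity similarity matching: فوكي"]
scores = [1.0]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores, name="insurance-val")
print(evaluator(model))  # reports Pearson and Spearman cosine correlations
```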
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 9,431 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 21 tokens</li><li>mean: 25.44 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 25.61 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 0.1</li><li>mean: 0.27</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:----------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: تام</code> | <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: بي بي أم</code> | <code>0.19999999999999998</code> |
| <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: تي في آر</code> | <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: أبارث</code> | <code>0.19999999999999998</code> |
| <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: تي زد كو</code> | <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: فوسو</code> | <code>0.19999999999999998</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
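For reference, this is how such a loss is typically wired up with the classic `fit` API; a minimal sketch, where the example pair and label are placeholders and the actual run used the hyperparameters listed below:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# The base model uses custom code, hence trust_remote_code=True
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

# Placeholder pair: (sentence_0, sentence_1) with a similarity label in [0, 1]
train_examples = [InputExample(texts=["prompt A", "prompt B"], label=0.2)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# CosineSimilarityLoss regresses cosine(u, v) onto the label with an MSE objective
train_loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=3)
```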
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | insurance-val_spearman_cosine |
|:------:|:----:|:-------------:|:-----------------------------:|
| 0.0983 | 58 | - | 0.4180 |
| 0.1966 | 116 | - | 0.5385 |
| 0.2949 | 174 | - | 0.5606 |
| 0.3932 | 232 | - | 0.5969 |
| 0.4915 | 290 | - | 0.5867 |
| 0.5898 | 348 | - | 0.5822 |
| 0.6881 | 406 | - | 0.6342 |
| 0.7864 | 464 | - | 0.6071 |
| 0.8475 | 500 | 0.049 | - |
| 0.8847 | 522 | - | 0.6316 |
| 0.9831 | 580 | - | 0.6414 |
| 1.0 | 590 | - | 0.6270 |
| 1.0814 | 638 | - | 0.6230 |
| 1.1797 | 696 | - | 0.6232 |
| 1.2780 | 754 | - | 0.6161 |
| 1.3763 | 812 | - | 0.6348 |
| 1.4746 | 870 | - | 0.6566 |
| 1.5729 | 928 | - | 0.6656 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
romolocaponera/ppo-SnowballTarget
|
romolocaponera
| 2025-09-22T11:18:57Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-09-22T11:18:54Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: romolocaponera/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
aamijar/Llama-2-7b-hf-dora-r8-boolq-epochs0
|
aamijar
| 2025-09-22T11:12:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T11:12:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlekseyCalvin/LYRICAL_MT_ru2en_3a7_Yandex8b_EMAbetas05to098
|
AlekseyCalvin
| 2025-09-22T11:08:49Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:yandex/YandexGPT-5-Lite-8B-pretrain",
"lora",
"orpo",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:yandex/YandexGPT-5-Lite-8B-pretrain",
"region:us"
] | null | 2025-09-11T12:07:29Z |
---
base_model: yandex/YandexGPT-5-Lite-8B-pretrain
library_name: peft
tags:
- base_model:adapter:yandex/YandexGPT-5-Lite-8B-pretrain
- lora
- orpo
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
prasun77/qwen-finance-lora
|
prasun77
| 2025-09-22T11:06:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T11:01:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kznmp3/blockassist
|
kznmp3
| 2025-09-22T11:05:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lively raging hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-10T04:53:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lively raging hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Accountable-SA/gemma-3-270m-it-base-Q4_K_M-GGUF
|
Accountable-SA
| 2025-09-22T11:03:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Accountable-SA/gemma-3-270m-it-base",
"base_model:quantized:Accountable-SA/gemma-3-270m-it-base",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T11:03:13Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: Accountable-SA/gemma-3-270m-it-base
---
# massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF
This model was converted to GGUF format from [`Accountable-SA/gemma-3-270m-it-base`](https://huggingface.co/Accountable-SA/gemma-3-270m-it-base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Accountable-SA/gemma-3-270m-it-base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -c 2048
```
|
DBD-research-group/Bird-MAE-Huge
|
DBD-research-group
| 2025-09-22T11:01:45Z | 138 | 0 |
transformers
|
[
"transformers",
"safetensors",
"feature-extraction",
"audio-classification",
"audio",
"custom_code",
"dataset:DBD-research-group/BirdSet",
"arxiv:2504.12880",
"region:us"
] |
audio-classification
| 2025-06-26T15:29:24Z |
---
datasets:
- DBD-research-group/BirdSet
pipeline_tag: audio-classification
library_name: transformers
tags:
- audio-classification
- audio
---
# Bird-MAE-Huge: Can Masked Autoencoders Also Listen to Birds?
- **Paper**: [ArXiv](https://arxiv.org/abs/2504.12880)
- **Repo**: [GitHub](https://github.com/DBD-research-group/Bird-MAE)
## Abstract
Masked Autoencoders (MAEs) have shown competitive results in audio classification by learning rich semantic representations through an efficient self-supervised reconstruction task. However, general-purpose models fail to generalize well when applied directly to fine-grained audio domains. Specifically, bird-sound classification requires distinguishing subtle inter-species differences and managing high intra-species acoustic variability, thereby revealing the performance limitations of general-domain Audio-MAE models. This work demonstrates that bridging this domain gap requires more than domain-specific pretraining data; adapting the entire training pipeline is crucial. We systematically revisit and adapt the pretraining recipe, fine-tuning methods, and frozen feature utilization to bird sounds using BirdSet, a large-scale bioacoustic dataset comparable to AudioSet. Our resulting Bird-MAE achieves new state-of-the-art results in BirdSet's multi-label classification benchmark. Additionally, we introduce the parameter-efficient prototypical probing, enhancing the utility of frozen MAE representations and closely approaching fine-tuning performance in low-resource settings. Bird-MAE's prototypical probes outperform linear probing by up to 37 percentage points in MAP and narrow the gap to fine-tuning to approximately 3.3 percentage points on average across BirdSet downstream tasks. Bird-MAE also demonstrates robust few-shot capabilities with prototypical probing in our newly established few-shot benchmark on BirdSet, highlighting the potential of tailored self-supervised learning pipelines for fine-grained audio domains.
### Evaluation Results
**Table 1**
Probing results on the multi-label classification benchmark BirdSet with full data (MAP%).
Comparison of linear probing vs. prototypical probing using frozen encoder representations. Models follow
the evaluation protocol of BirdSet. **Best** results are highlighted.
| Model | Arch. | Probing | HSNval | POW | PER | NES | UHH | NBP | SSW | SNE |
|-------------|-----------|---------|--------|-------|-------|-------|-------|-------|-------|-------|
| BirdAVES | HUBERT | linear | 14.91 | 12.60 | 5.41 | 6.36 | 11.76 | 33.68 | 4.55 | 7.86 |
| BirdAVES | HUBERT | proto | 32.52 | 19.98 | 5.14 | 11.87 | 15.41 | 39.85 | 7.71 | 9.59 |
| SimCLR | CvT-13 | linear | 17.29 | 17.89 | 6.66 | 10.64 | 7.43 | 26.35 | 6.99 | 8.92 |
| SimCLR | CvT-13 | proto | 18.00 | 17.02 | 3.37 | 7.91 | 7.08 | 26.60 | 5.36 | 8.83 |
| Audio-MAE | ViT-B/16 | linear | 8.77 | 10.36 | 3.72 | 4.48 | 10.78 | 24.70 | 2.50 | 5.60 |
| Audio-MAE | ViT-B/16 | proto | 19.42 | 19.58 | 9.34 | 15.53 | 16.84 | 35.32 | 8.81 | 12.34 |
| Bird-MAE | ViT-B/16 | linear | 13.06 | 14.28 | 5.63 | 8.16 | 14.75 | 34.57 | 5.59 | 8.16 |
| Bird-MAE | ViT-B/16 | proto | 43.84 | 37.67 | 20.72 | 28.11 | 26.46 | 62.68 | 22.69 | 22.16 |
| Bird-MAE | ViT-B/16 | linear | 12.44 | 16.20 | 6.63 | 8.31 | 15.41 | 41.91 | 5.75 | 7.94 |
| Bird-MAE | ViT-B/16 | proto | **49.97** | **51.73** | **31.38** | **37.80** | **29.97** | **69.50** | **37.74** | **29.96** |
| Bird-MAE | ViT-L/16 | linear | 13.25 | 14.82 | 7.29 | 7.93 | 12.99 | 38.71 | 5.60 | 7.84 |
| Bird-MAE | ViT-L/16 | proto | 47.52 | 49.65 | 30.43 | 35.85 | 28.91 | 69.13 | 35.83 | 28.31 |
For more details refer to the paper provided.
## Example
This model can be easily loaded and used for inference with the `transformers` library.
> Note that this is the base model and you need to finetune the classification head.
> We provide the option to use a Linear and Proto Probing head.
```python
from transformers import AutoFeatureExtractor, AutoModel
import librosa
# Load the model and feature extractor
model = AutoModel.from_pretrained("DBD-research-group/Bird-MAE-Huge",trust_remote_code=True)
feature_extractor = AutoFeatureExtractor.from_pretrained("DBD-research-group/Bird-MAE-Huge", trust_remote_code=True)
model.eval()
# Load an example audio file
audio_path = librosa.ex('robin')
# The model is trained on audio sampled at 32,000 Hz
audio, sample_rate = librosa.load(audio_path, sr=32_000)
mel_spectrogram = feature_extractor(audio)
# embedding with shape corresponding to model size
embedding = model(mel_spectrogram)
```
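Since the checkpoint ships without a classification head, a probe has to be trained on top of the frozen embedding. A minimal linear-probe sketch — the class count is a placeholder and the embedding is assumed to be a `(batch, dim)` tensor; see the repository for the actual Linear and Proto Probing heads:
```python
import torch

num_classes = 21  # placeholder: number of classes in your downstream task
probe = torch.nn.Linear(embedding.shape[-1], num_classes)

logits = probe(embedding.detach())  # detach: gradients flow only into the probe
probs = torch.sigmoid(logits)       # multi-label probabilities
```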
## Citation
```
@misc{rauch2025audiomae,
title={Can Masked Autoencoders Also Listen to Birds?},
author={Lukas Rauch and René Heinrich and Ilyass Moummad and Alexis Joly and Bernhard Sick and Christoph Scholz},
year={2025},
eprint={2504.12880},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2504.12880},
}
```
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-80
|
MattBou00
| 2025-09-22T11:00:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T10:59:04Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_10-46-42/checkpoints/checkpoint-epoch-80")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_10-46-42/checkpoints/checkpoint-epoch-80")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_10-46-42/checkpoints/checkpoint-epoch-80")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
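The value-head model returns scalar value estimates alongside the language-model logits. A minimal sketch of unpacking them, assuming TRL's usual `(lm_logits, loss, value)` tuple output (check the installed TRL version):
```python
# Assumption: forward returns (lm_logits, loss, value) as in standard TRL
lm_logits, loss, values = outputs
print(values.shape)  # one scalar value estimate per token position
```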
|
qualiaadmin/6b7e3e9d-a230-40ba-88fc-9badc901e809
|
qualiaadmin
| 2025-09-22T11:00:02Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-22T10:58:23Z |
---
base_model: lerobot/smolvla_base
datasets: Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- lerobot
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
DBD-research-group/Bird-MAE-Base
|
DBD-research-group
| 2025-09-22T10:58:34Z | 488 | 0 |
transformers
|
[
"transformers",
"safetensors",
"feature-extraction",
"audio-classification",
"audio",
"custom_code",
"dataset:DBD-research-group/BirdSet",
"arxiv:2504.12880",
"region:us"
] |
audio-classification
| 2025-06-26T14:44:21Z |
---
datasets:
- DBD-research-group/BirdSet
pipeline_tag: audio-classification
library_name: transformers
tags:
- audio-classification
- audio
---
# Disclaimer: There might be some errors in the models; we still have to check this.
# Bird-MAE-Base: Can Masked Autoencoders Also Listen to Birds?
- **Paper**: [ArXiv](https://arxiv.org/abs/2504.12880)
- **Repo**: [GitHub](https://github.com/DBD-research-group/Bird-MAE)
## Abstract
Masked Autoencoders (MAEs) have shown competitive results in audio classification by learning rich semantic representations through an efficient self-supervised reconstruction task. However, general-purpose models fail to generalize well when applied directly to fine-grained audio domains. Specifically, bird-sound classification requires distinguishing subtle inter-species differences and managing high intra-species acoustic variability, thereby revealing the performance limitations of general-domain Audio-MAE models. This work demonstrates that bridging this domain gap requires more than domain-specific pretraining data; adapting the entire training pipeline is crucial. We systematically revisit and adapt the pretraining recipe, fine-tuning methods, and frozen feature utilization to bird sounds using BirdSet, a large-scale bioacoustic dataset comparable to AudioSet. Our resulting Bird-MAE achieves new state-of-the-art results in BirdSet's multi-label classification benchmark. Additionally, we introduce the parameter-efficient prototypical probing, enhancing the utility of frozen MAE representations and closely approaching fine-tuning performance in low-resource settings. Bird-MAE's prototypical probes outperform linear probing by up to 37 percentage points in MAP and narrow the gap to fine-tuning to approximately 3.3 percentage points on average across BirdSet downstream tasks. Bird-MAE also demonstrates robust few-shot capabilities with prototypical probing in our newly established few-shot benchmark on BirdSet, highlighting the potential of tailored self-supervised learning pipelines for fine-grained audio domains.
### Evaluation Results
**Table 1**
Probing results on the multi-label classification benchmark BirdSet with full data (MAP%).
Comparison of linear probing vs. prototypical probing using frozen encoder representations. Models follow
the evaluation protocol of BirdSet. **Best** results are highlighted.
| Model | Arch. | Probing | HSNval | POW | PER | NES | UHH | NBP | SSW | SNE |
|-------------|-----------|---------|--------|-------|-------|-------|-------|-------|-------|-------|
| BirdAVES | HUBERT | linear | 14.91 | 12.60 | 5.41 | 6.36 | 11.76 | 33.68 | 4.55 | 7.86 |
| BirdAVES | HUBERT | proto | 32.52 | 19.98 | 5.14 | 11.87 | 15.41 | 39.85 | 7.71 | 9.59 |
| SimCLR | CvT-13 | linear | 17.29 | 17.89 | 6.66 | 10.64 | 7.43 | 26.35 | 6.99 | 8.92 |
| SimCLR | CvT-13 | proto | 18.00 | 17.02 | 3.37 | 7.91 | 7.08 | 26.60 | 5.36 | 8.83 |
| Audio-MAE | ViT-B/16 | linear | 8.77 | 10.36 | 3.72 | 4.48 | 10.78 | 24.70 | 2.50 | 5.60 |
| Audio-MAE | ViT-B/16 | proto | 19.42 | 19.58 | 9.34 | 15.53 | 16.84 | 35.32 | 8.81 | 12.34 |
| Bird-MAE | ViT-B/16 | linear | 13.06 | 14.28 | 5.63 | 8.16 | 14.75 | 34.57 | 5.59 | 8.16 |
| Bird-MAE | ViT-B/16 | proto | 43.84 | 37.67 | 20.72 | 28.11 | 26.46 | 62.68 | 22.69 | 22.16 |
| Bird-MAE | ViT-B/16 | linear | 12.44 | 16.20 | 6.63 | 8.31 | 15.41 | 41.91 | 5.75 | 7.94 |
| Bird-MAE | ViT-B/16 | proto | **49.97** | **51.73** | **31.38** | **37.80** | **29.97** | **69.50** | **37.74** | **29.96** |
| Bird-MAE | ViT-L/16 | linear | 13.25 | 14.82 | 7.29 | 7.93 | 12.99 | 38.71 | 5.60 | 7.84 |
| Bird-MAE | ViT-L/16 | proto | 47.52 | 49.65 | 30.43 | 35.85 | 28.91 | 69.13 | 35.83 | 28.31 |
For more details refer to the paper provided.
## Example
This model can be easily loaded and used for inference with the `transformers` library.
> Note that this is the base model and you need to finetune the classification head.
> We provide the option to use a Linear and Proto Probing head.
```python
from transformers import AutoFeatureExtractor, AutoModel
import librosa
# Load the model and feature extractor
model = AutoModel.from_pretrained("DBD-research-group/Bird-MAE-Base",trust_remote_code=True)
feature_extractor = AutoFeatureExtractor.from_pretrained("DBD-research-group/Bird-MAE-Base", trust_remote_code=True)
model.eval()
# Load an example audio file
audio_path = librosa.ex('robin')
# The model is trained on audio sampled at 32,000 Hz
audio, sample_rate = librosa.load(audio_path, sr=32_000)
mel_spectrogram = feature_extractor(audio)
# embedding with shape corresponding to model size
embedding = model(mel_spectrogram)
```
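For the prototypical probing referenced above, a simplified sketch is shown below — one learnable prototype per class, scored by cosine similarity against the frozen embedding. The class count is a placeholder, the embedding is assumed to be a `(batch, dim)` tensor, and the paper's actual prototypical probing head is more elaborate:
```python
import torch
import torch.nn.functional as F

num_classes = 21  # placeholder: number of classes in your downstream task
prototypes = torch.nn.Parameter(torch.randn(num_classes, embedding.shape[-1]))

# Cosine similarity between the frozen embedding and each class prototype
logits = F.normalize(embedding.detach(), dim=-1) @ F.normalize(prototypes, dim=-1).T
probs = torch.sigmoid(logits)  # multi-label probabilities
```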
## Citation
```
@misc{rauch2025audiomae,
title={Can Masked Autoencoders Also Listen to Birds?},
author={Lukas Rauch and René Heinrich and Ilyass Moummad and Alexis Joly and Bernhard Sick and Christoph Scholz},
year={2025},
eprint={2504.12880},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2504.12880},
}
```
|
Accountable-SA/gemma-3-270m-it-base-Q3_K_M-GGUF
|
Accountable-SA
| 2025-09-22T10:56:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Accountable-SA/gemma-3-270m-it-base",
"base_model:quantized:Accountable-SA/gemma-3-270m-it-base",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:55:56Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: Accountable-SA/gemma-3-270m-it-base
---
# massimogiuseppe/gemma-3-270m-it-base-Q3_K_M-GGUF
This model was converted to GGUF format from [`Accountable-SA/gemma-3-270m-it-base`](https://huggingface.co/Accountable-SA/gemma-3-270m-it-base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Accountable-SA/gemma-3-270m-it-base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q3_K_M-GGUF --hf-file gemma-3-270m-it-base-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q3_K_M-GGUF --hf-file gemma-3-270m-it-base-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q3_K_M-GGUF --hf-file gemma-3-270m-it-base-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q3_K_M-GGUF --hf-file gemma-3-270m-it-base-q3_k_m.gguf -c 2048
```
|
felixZzz/q04m4jep-step_00400
|
felixZzz
| 2025-09-22T10:55:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:53:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Tanzania-0.3B-i1-GGUF
|
mradermacher
| 2025-09-22T10:48:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"creative",
"roleplay",
"story-telling",
"story-writing",
"en",
"dataset:practical-dreamer/RPGPT_PublicDomain-ShareGPT",
"dataset:Gryphe/Opus-WritingPrompts",
"base_model:XeTute/Tanzania-0.3B",
"base_model:quantized:XeTute/Tanzania-0.3B",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-09-22T10:41:51Z |
---
base_model: XeTute/Tanzania-0.3B
datasets:
- practical-dreamer/RPGPT_PublicDomain-ShareGPT
- Gryphe/Opus-WritingPrompts
language:
- en
library_name: transformers
license: gemma
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- creative
- roleplay
- story-telling
- story-writing
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/XeTute/Tanzania-0.3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Tanzania-0.3B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Tanzania-0.3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ3_M.gguf) | i1-IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q4_0.gguf) | i1-Q4_0 | 0.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q4_1.gguf) | i1-Q4_1 | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.3B-i1-GGUF/resolve/main/Tanzania-0.3B.i1-Q6_K.gguf) | i1-Q6_K | 0.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
JaspervanLeuven/act_piper_ab_test
|
JaspervanLeuven
| 2025-09-22T10:47:57Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:JaspervanLeuven/T1_E50_pnp_3D_box_19_09_2025",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-22T10:47:50Z |
---
datasets: JaspervanLeuven/T1_E50_pnp_3D_box_19_09_2025
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
zjhhhh/qwen2.5_3B_Instruct_reward_beta_1_eta_1e5_step_312_final
|
zjhhhh
| 2025-09-22T10:47:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:46:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nnilayy/dreamer_stride_256-binary-arousal-Kfold-4-stride_256
|
nnilayy
| 2025-09-22T10:44:50Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-09-22T10:44:43Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
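As a rough sketch of the mixin pattern (the class below is hypothetical and will not match this checkpoint's actual architecture, which is not documented here):
```python
# Hypothetical illustration of PyTorchModelHubMixin usage.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):
    # Placeholder architecture for illustration only.
    def __init__(self, in_features: int = 16, num_classes: int = 2):
        super().__init__()
        self.net = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.net(x)

# The mixin adds save_pretrained/from_pretrained/push_to_hub;
# init kwargs are round-tripped through config.json automatically.
model = MyModel(in_features=16, num_classes=2)
model.save_pretrained("my-model")
reloaded = MyModel.from_pretrained("my-model")
```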
|
mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF
|
mradermacher
| 2025-09-22T10:41:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:yujunzhou/Advanced_Risk_Dice_Qwen3-4B",
"base_model:quantized:yujunzhou/Advanced_Risk_Dice_Qwen3-4B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T10:20:10Z |
---
base_model: yujunzhou/Advanced_Risk_Dice_Qwen3-4B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/yujunzhou/Advanced_Risk_Dice_Qwen3-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Advanced_Risk_Dice_Qwen3-4B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.Q2_K.gguf) | Q2_K | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.Q3_K_S.gguf) | Q3_K_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.Q3_K_M.gguf) | Q3_K_M | 2.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.Q3_K_L.gguf) | Q3_K_L | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.IQ4_XS.gguf) | IQ4_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.Q4_K_S.gguf) | Q4_K_S | 2.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.Q4_K_M.gguf) | Q4_K_M | 2.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.Q5_K_S.gguf) | Q5_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.Q5_K_M.gguf) | Q5_K_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.Q6_K.gguf) | Q6_K | 3.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.Q8_0.gguf) | Q8_0 | 4.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Advanced_Risk_Dice_Qwen3-4B-GGUF/resolve/main/Advanced_Risk_Dice_Qwen3-4B.f16.gguf) | f16 | 8.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758537408
|
poolkiltzn
| 2025-09-22T10:38:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T10:37:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
duongve/NetaYume-Lumina-Image-2.0
|
duongve
| 2025-09-22T10:36:13Z | 2,902 | 13 |
diffusion-single-file
|
[
"diffusion-single-file",
"stable-diffusion",
"text-to-image",
"comfyui",
"base_model:Alpha-VLLM/Lumina-Image-2.0",
"base_model:finetune:Alpha-VLLM/Lumina-Image-2.0",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-06T09:08:01Z |
---
pipeline_tag: text-to-image
license: apache-2.0
base_model:
- neta-art/Neta-Lumina
- Alpha-VLLM/Lumina-Image-2.0
tags:
- stable-diffusion
- text-to-image
- comfyui
- diffusion-single-file
---
# NetaYume Lumina Image v2.0

---
**I. Introduction**
NetaYume Lumina is a text-to-image model fine-tuned from [Neta Lumina](https://huggingface.co/neta-art/Neta-Lumina), a high-quality anime-style image generation model developed by [Neta.art Lab](https://huggingface.co/neta-art). It builds upon [Lumina-Image-2.0](https://huggingface.co/Alpha-VLLM/Lumina-Image-2.0), an open-source base model released by the [Alpha-VLLM](https://huggingface.co/Alpha-VLLM) team at Shanghai AI Laboratory.
This model was trained with the goal of not only generating realistic human images but also producing high-quality anime-style images. Despite being fine-tuned on a specific dataset, it retains a significant amount of knowledge from the base model.
**Key Features:**
- **High-Quality Anime Generation**: Generates detailed anime-style images with sharp outlines, vibrant colors, and smooth shading.
- **Improved Character Understanding**: Better captures characters, especially those from the Danbooru dataset, resulting in more coherent and accurate character representations.
- **Enhanced Fine Details**: Accurately generates accessories, clothing textures, hairstyles, and background elements with greater clarity.
The file `NetaYume_Lumina_v2_all_in_one.safetensors` is an all-in-one checkpoint that bundles the VAE, text encoder, and image backbone weights for use with ComfyUI.
---
**II. Model Components & Training Details**
- **Text Encoder**: Pre-trained **Gemma-2-2b**
- **Variational Autoencoder**: Pre-trained **Flux.1 dev's VAE**
- **Image Backbone**: Fine-tuned from **NetaLumina's image backbone**
---
**III. Suggestion**
**System Prompt:** A system prompt helps the model understand and align with your prompts, making it easier to generate the images you want.
For anime-style images using Danbooru tags, either of the following works:
- You are an assistant designed to generate anime images based on textual prompts.
- You are an assistant designed to generate high-quality images based on user prompts and danbooru tags.
**Recommended Settings**
- CFG: 4–7
- Sampling Steps: 40-50
- Sampler:
- Euler a (with scheduler: normal)
- res_multistep (with scheduler: linear_quadratic)
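For quick experimentation outside ComfyUI, a minimal diffusers sketch along these lines may work; note the assumptions: it loads the diffusers-format *base* model (this fine-tune ships as a single file for ComfyUI), and the pipeline class name reflects recent diffusers releases:
```python
import torch
from diffusers import Lumina2Pipeline

# Assumption: loading the diffusers-format base model, not this fine-tune,
# which is distributed as an all-in-one safetensors file for ComfyUI.
pipe = Lumina2Pipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="1girl, silver hair, starry night, detailed background",
    guidance_scale=5.0,        # recommended CFG range is 4-7
    num_inference_steps=45,    # recommended 40-50
).images[0]
image.save("sample.png")
```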
---
**IV. Acknowledgments**
- [narugo1992](https://huggingface.co/narugo) – for the invaluable Danbooru dataset
- [Alpha-VLLM](https://huggingface.co/Alpha-VLLM) – for creating a wonderful base model!
- [Neta.art](https://huggingface.co/neta-art/Neta-Lumina) and their team – for openly sharing an awesome model.
|
TiMOld/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_smooth_ibis
|
TiMOld
| 2025-09-22T10:29:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am roaring_smooth_ibis",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:37:39Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am roaring_smooth_ibis
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Munnzee/spotify
|
Munnzee
| 2025-09-22T10:24:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-22T10:16:32Z |
---
license: mit
---
```python
import os

from huggingface_hub import HfApi

api = HfApi(token=os.getenv("HF_TOKEN"))
api.upload_folder(
    folder_path="/path/to/local/model",
    repo_id="Munnzee/spotify",
    repo_type="model",
)
```
|
Alicia22/22SAT_KK10_l15
|
Alicia22
| 2025-09-22T10:24:17Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-22T10:21:50Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/CORE2-llama-3.2-3b-MATH-GGUF
|
mradermacher
| 2025-09-22T10:20:20Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"grpo",
"hf_jobs",
"en",
"base_model:lhkhiem28/CORE2-llama-3.2-3b-MATH",
"base_model:quantized:lhkhiem28/CORE2-llama-3.2-3b-MATH",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T10:05:32Z |
---
base_model: lhkhiem28/CORE2-llama-3.2-3b-MATH
language:
- en
library_name: transformers
model_name: CORE2-llama-3.2-3b-MATH
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- grpo
- hf_jobs
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/lhkhiem28/CORE2-llama-3.2-3b-MATH
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#CORE2-llama-3.2-3b-MATH-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CORE2-llama-3.2-3b-MATH-GGUF/resolve/main/CORE2-llama-3.2-3b-MATH.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ziad177/whisper-large-qlora_
|
Ziad177
| 2025-09-22T10:14:47Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:14:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/solvent-eclipse-vpred-noob-ai-xl-illustrious-xl-merge-model-v30-sdxl
|
John6666
| 2025-09-22T10:14:34Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"v-pred",
"merge",
"noobai",
"obsession",
"alchemix",
"rouwei",
"Illustrious XL v2.0",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:merge:Laxhar/noobai-XL-Vpred-1.0",
"base_model:Minthy/RouWei-0.8",
"base_model:merge:Minthy/RouWei-0.8",
"base_model:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-09-22T10:03:35Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- v-pred
- merge
- noobai
- obsession
- alchemix
- rouwei
- Illustrious XL v2.0
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v2.0
- Laxhar/noobai-XL-Vpred-1.0
- Minthy/RouWei-0.8
---
Original model is [here](https://civitai.com/models/1513509/solventeclipse-vpred-noobaixl-illustrious-xl-merge-model?modelVersionId=2240514).
This model was created by [hybskgks28275](https://civitai.com/user/hybskgks28275).
|
MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-100
|
MattBou00
| 2025-09-22T10:12:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T10:11:07Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/checkpoints/checkpoint-epoch-100")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/checkpoints/checkpoint-epoch-100")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/checkpoints/checkpoint-epoch-100")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
nnilayy/dreamer_stride_256-binary-arousal-Kfold-3-stride_256
|
nnilayy
| 2025-09-22T10:07:44Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-09-22T10:07:38Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
mradermacher/youtube-comments-distilbert-GGUF
|
mradermacher
| 2025-09-22T10:03:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:nadakandrew/youtube-comments-distilbert",
"base_model:quantized:nadakandrew/youtube-comments-distilbert",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-09-22T10:02:17Z |
---
base_model: nadakandrew/youtube-comments-distilbert
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/nadakandrew/youtube-comments-distilbert
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#youtube-comments-distilbert-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/youtube-comments-distilbert-GGUF/resolve/main/youtube-comments-distilbert.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
GeraniumCat/bash-seq-to-seq
|
GeraniumCat
| 2025-09-22T09:52:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"translation",
"dataset:aelhalili/bash-commands-dataset",
"dataset:darkknight25/Linux_Terminal_Commands_Dataset",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-09-22T09:48:43Z |
---
library_name: transformers
datasets:
- aelhalili/bash-commands-dataset
- darkknight25/Linux_Terminal_Commands_Dataset
metrics:
- bleu
pipeline_tag: translation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mpasila/Poro-2-Conversational-V1-LoRA-8B
|
mpasila
| 2025-09-22T09:52:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"text-generation-inference",
"transformers",
"unsloth",
"llama",
"trl",
"en",
"base_model:LumiOpen/Llama-Poro-2-8B-base",
"base_model:adapter:LumiOpen/Llama-Poro-2-8B-base",
"license:llama3.1",
"region:us"
] | null | 2025-09-22T09:39:23Z |
---
base_model: LumiOpen/Llama-Poro-2-8B-base
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: llama3.1
language:
- en
library_name: peft
---
Since Unsloth is currently broken and won't let me merge the model (a different merging method caused other issues, so it is probably buggy as well), I'll probably wait until it's fixed before retraining.
# Uploaded model
- **Developed by:** mpasila
- **License:** Llama 3.1
- **Finetuned from model :** LumiOpen/Llama-Poro-2-8B-base
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cyberdelia/latest_sdxl_models
|
cyberdelia
| 2025-09-22T09:48:50Z | 801 | 5 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"sd-xl",
"text-to-image",
"photorealistic",
"cyberrealistic",
"image-generation",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-05-24T13:19:25Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion-xl
- sd-xl
- text-to-image
- photorealistic
- cyberrealistic
- image-generation
- diffusers
model-index:
- name: CyberRealistic Models Collection
results: []
---
Latest versions of SDXL, Pony, and Illustrious models.
|
ArrayCats/LoRA-1.5
|
ArrayCats
| 2025-09-22T09:46:54Z | 0 | 8 | null |
[
"license:unknown",
"region:us"
] | null | 2023-07-23T00:06:13Z |
---
license: unknown
---
This repo contains some SDXL/PONY models, but I plan to host them separately in the future, so the SDXL/PONY models under this repo will no longer be updated (they won't be actively deleted either).
|
CharlesLi/qwen_vl_3b_seedbench_position_3x3blocks_300step
|
CharlesLi
| 2025-09-22T09:39:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-22T08:28:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sigurdur/SmolVLM-Base-ICELANDIC
|
Sigurdur
| 2025-09-22T09:39:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T09:33:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chillies/uit-dsc-2025
|
chillies
| 2025-09-22T09:38:03Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T09:34:07Z |
---
base_model: unsloth/qwen3-4b-base-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** chillies
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-base-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cyberdelia/CyberIllustrious
|
cyberdelia
| 2025-09-22T09:37:46Z | 1,140 | 8 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"sdxl",
"text-to-image",
"photorealistic",
"cyberrealistic",
"illustrious",
"image-generation",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-01-31T09:12:25Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- sdxl
- text-to-image
- photorealistic
- cyberrealistic
- illustrious
- image-generation
- diffusers
model-index:
- name: CyberIllustrious | CyberRealistic
results: []
---
# CyberIllustrious | CyberRealistic
**CyberIllustrious** (also known as **CyberRealistic**) is a photorealistic adaptation of the **Illustrious-XL** checkpoint, designed to produce stunningly realistic images with ease. Developed by [Cyberdelia](https://civitai.com/user/Cyberdelia), this model excels in generating high-quality visuals, particularly in portrait and editorial-style scenes.
---
## 🧠 Model Details
- **Model Type**: Text-to-Image Generation
- **Base Model**: Illustrious-XL
- **Format**: `safetensors`
- **Creator**: [Cyberdelia](https://civitai.com/user/Cyberdelia)
- **License**: [CreativeML Open RAIL++-M License](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
---
## ✨ Features
- **Photorealism**: Generates highly detailed and realistic images, especially effective for human subjects.
- **Ease of Use**: Achieves impressive results with straightforward prompts.
- **Integrated VAE**: Comes with a baked-in Variational Autoencoder for enhanced image quality.
- **Versatility**: Suitable for various applications, including portraits, fashion, and cinematic scenes.
---
## 🛠️ Recommended Settings
| Parameter | Recommended Value |
|-----------------|------------------------------------------------|
| Sampling Steps | 30+ |
| Sampler | DPM++ SDE Karras / DPM++ 2M Karras / Euler a |
| Resolution | 896x1152 / 832x1216 |
| CFG Scale | 5 |
| VAE | Already baked-in |
---
## 🧾 Example Prompts
> (masterpiece, best quality), ultra-detailed, realistic photo of a 22-year-old woman, natural lighting, depth of field, candid moment, color graded, RAW photo, soft cinematic bokeh
> (masterpiece, photorealistic), editorial fashion photo, close-up, dramatic side lighting, textured skin, shallow depth of field, soft shadows
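The settings and prompts above translate directly into a Diffusers call. A minimal sketch, assuming this repo hosts a Diffusers-format SDXL pipeline (the repo id and dtype choice below are assumptions; steps, CFG scale, and resolution mirror the recommended settings):

```python
# Minimal sketch, assuming a Diffusers-format SDXL checkpoint in this repo;
# steps, CFG scale, and resolution follow the recommended settings above.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cyberdelia/CyberIllustrious", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "(masterpiece, best quality), ultra-detailed, realistic photo of a 22-year-old woman, "
    "natural lighting, depth of field, candid moment",
    num_inference_steps=30,   # 30+ recommended
    guidance_scale=5.0,       # CFG scale 5
    width=896,
    height=1152,              # one of the recommended resolutions
).images[0]
image.save("cyberillustrious_example.png")
```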
---
## 📸 Example Outputs

|
LuoYiSULIXAY/lao_mlm_model
|
LuoYiSULIXAY
| 2025-09-22T09:33:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:GKLMIP/bert-laos-base-uncased",
"base_model:finetune:GKLMIP/bert-laos-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-09-22T09:32:06Z |
---
library_name: transformers
base_model: GKLMIP/bert-laos-base-uncased
tags:
- generated_from_trainer
model-index:
- name: lao_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lao_mlm_model
This model is a fine-tuned version of [GKLMIP/bert-laos-base-uncased](https://huggingface.co/GKLMIP/bert-laos-base-uncased) on an unknown dataset.
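Since this is a masked-language model, a minimal usage sketch looks like the following (assuming the base model's standard BERT-style `[MASK]` token; the Lao example sentence is an arbitrary placeholder):

```python
# Minimal sketch, assuming the default BERT-style [MASK] token from the base model.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="LuoYiSULIXAY/lao_mlm_model")
# Any Lao sentence with a [MASK] placeholder works here.
for prediction in unmasker("ສະບາຍດີ [MASK]"):
    print(prediction["token_str"], prediction["score"])
```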
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
mradermacher/zeta-1.5b-sft-GGUF
|
mradermacher
| 2025-09-22T09:32:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"base_model:Woutermans/zeta-1.5b-sft",
"base_model:quantized:Woutermans/zeta-1.5b-sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T09:26:45Z |
---
base_model: Woutermans/zeta-1.5b-sft
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Woutermans/zeta-1.5b-sft
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#zeta-1.5b-sft-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/zeta-1.5b-sft-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
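As a minimal sketch, one of the files listed below can be loaded with `llama-cpp-python` (the chosen quant file and context size here are assumptions; pick whichever quant suits your hardware):

```python
# Minimal sketch, assuming llama-cpp-python is installed and the Q4_K_M quant
# from this repo has been downloaded into the working directory.
from llama_cpp import Llama

llm = Llama(model_path="zeta-1.5b-sft.Q4_K_M.gguf", n_ctx=2048)
result = llm("Write a one-line summary of GGUF quantization.", max_tokens=64)
print(result["choices"][0]["text"])
```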
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-1.5b-sft-GGUF/resolve/main/zeta-1.5b-sft.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mohhtl/b9fd5e61-7c6e-4ae6-9323-75e2f926365e
|
mohhtl
| 2025-09-22T09:32:45Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"lora",
"transformers",
"dataset:train_data.json",
"base_model:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:10:29Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- base_model:adapter:unsloth/Qwen2.5-1.5B
- lora
- transformers
datasets:
- train_data.json
pipeline_tag: text-generation
model-index:
- name: b9fd5e61-7c6e-4ae6-9323-75e2f926365e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.13.0.dev0`
```yaml
base_model: unsloth/Qwen2.5-1.5B
trust_remote_code: true
hub_model_id: mohhtl/b9fd5e61-7c6e-4ae6-9323-75e2f926365e
load_in_8bit: false
load_in_4bit: false
datasets:
- path: train_data.json
type:
field_instruction: "prompt"
field_output: "output"
dataset_prepared_path: ./last_run_prepared
output_dir: ./outputs/lora-out
sequence_len: 4096
sample_packing: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
gradient_accumulation_steps: 1
micro_batch_size: 4
max_steps: 500
optimizer: adamw_torch_fused
lr_scheduler: constant
learning_rate: 0.0002
bf16: auto
tf32: false
gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true
warmup_ratio: 0.1
saves_per_epoch: 1
weight_decay: 0.0
save_first_step: true
```
</details><br>
# b9fd5e61-7c6e-4ae6-9323-75e2f926365e
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the train_data.json dataset.
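A minimal loading sketch, assuming the LoRA adapter from this repo is applied on top of the unquantized base model with PEFT:

```python
# Minimal sketch: load the base model, then apply this repo's LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-1.5B")
model = PeftModel.from_pretrained(base, "mohhtl/b9fd5e61-7c6e-4ae6-9323-75e2f926365e")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B")

inputs = tokenizer("Hello,", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```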
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 19
- training_steps: 500
### Training results
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1
|
nnilayy/dreamer_stride_256-binary-arousal-Kfold-2-stride_256
|
nnilayy
| 2025-09-22T09:31:08Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-09-22T09:31:02Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
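A minimal loading sketch, assuming the original `nn.Module` subclass is available locally (the class name and constructor arguments below are hypothetical stand-ins; the mixin only restores weights and config onto your class definition):

```python
# Minimal sketch; DreamerClassifier and its arguments are hypothetical stand-ins
# for the authors' actual model class, which must be importable locally.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class DreamerClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_features: int = 256, num_classes: int = 2):
        super().__init__()
        self.head = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.head(x)

model = DreamerClassifier.from_pretrained(
    "nnilayy/dreamer_stride_256-binary-arousal-Kfold-2-stride_256"
)
```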
|
asulova/hamlet-dpo
|
asulova
| 2025-09-22T09:28:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:asulova/hamlet-merged",
"dpo",
"lora",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:2305.18290",
"base_model:asulova/hamlet-merged",
"region:us"
] |
text-generation
| 2025-09-22T09:28:08Z |
---
base_model: asulova/hamlet-merged
library_name: peft
model_name: dpo_hamlet
tags:
- base_model:adapter:asulova/hamlet-merged
- dpo
- lora
- transformers
- trl
- unsloth
licence: license
pipeline_tag: text-generation
---
# Model Card for dpo_hamlet
This model is a fine-tuned version of [asulova/hamlet-merged](https://huggingface.co/asulova/hamlet-merged).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="asulova/hamlet-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- PEFT 0.17.1
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Desalegnn/Desu-roberta-amharic-embed-medium-45k
|
Desalegnn
| 2025-09-22T09:27:12Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:40237",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:Desalegnn/amharic-passage-retrieval-dataset",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:rasyosef/roberta-medium-amharic",
"base_model:finetune:rasyosef/roberta-medium-amharic",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-22T09:27:07Z |
---
language:
- am
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:40237
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: rasyosef/roberta-medium-amharic
widget:
- source_sentence: የሞዴል ጥቃቅንና አነስተኛ ኢንተርፕራይዞች ኤግዚቢሽንና ባዛር የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር
እንደሚፈጠር ተገለጸ
sentences:
- አዲስ አበባ ፣ ነሃሴ 22 ፣ 2012 (ኤፍ ቢ ሲ) ሰኔ 16 ቀን 2010 ዓ.ም በአዲስ አበባ መስቀል አደባባይ ለጠቅላይ ሚኒስትር
ዐቢይ አሕመድ በተካሄደ የድጋፍ ሰልፍ ላይ ቦምብ በመወርወር የሽብር ወንጀል የተከሰሱ አምስት ተከሳሾች የጥፋተኝነት ፍርድ ተፈረደባቸው።ተከሳሾቹ
ጌቱ ቶሎሳ፣ ብርሃኑ ጃፋር፣ ጥላሁን ጌታቸው፣ ደሳለኝ ተስፋዬ እና ባህሩ ቶላ ሲሆኑ የጥፋተኝነት ፍርዱን የፌደራሉ ከፍተኛ ፍርድ
ቤት 1ኛ የወንጀል ችሎት ነው ያስተላለፈው።የዐቃቤ ህግ ክስ እንደሚያመላክተው ተከሳሾቹ ወንጀሉን የፈጸሙት ሰኔ 16 ቀን 2010
ዓ.ም በአዲስ አባባ መስቀል አደባባይ ከረፋዱ አራት ሰአት ላይ በ40 ሜትር ርቀት አካባቢ ለጠቅላይ ሚኒስትር ዐቢይ አሕመድ
በተደረገው የድጋፍ ሰልፍ ላይ ቦንብ በመወርወር ነው።ተከሳሾቹ በ1996 ዓ.ም የወጣውን የኢፌዴሪ የወንጀል ህግ አንቀጽ 32/1ሀ
እንዲሁም አንቀጽ 38 እና የፀረ ሽብርተኝነት አዋጅ ቁጥር 652/2001 አንቀጽ 3 ስር የተመለከተውን በመተላለፍ፤ በሃገሪቱ
ያለውን ለውጥ ተከትሎ በጠቅላይ ሚኒስትር ዐቢይ የሚመራ መንግስት መኖር የለበትም በሚል የራሳቸውን አላማ ለማራመድ በማሰብ መንቀሳቀሳቸውን
ዐቃቤ ህግ በክሱ አመላክቷል።በዚህም ከ1ኛ እስከ 4ኛ ያሉ ተከሳሾች ከሱሉሉታ ከተማ መነሻቸውን በማድረግ በስልክ በመደዋወልና
በአካል በመገናኘት በድጋፍ ሰልፉ ላይ እንዴት ቦምብ መወርወር እንዳለባቸው ሲዘጋጁ ቆይተዋልም ነው ያለው ዐቃቤ ህግ፡፡በዚህ
መልኩ በ1ኛ ተከሳሽ ቤት ቡራዩ በማደር 2ኛ ተከሳሽ በሚያሽከረክረው ተሽከርካሪ 2ኛ ተከሳሽ ያዘጋጀውን ኤፍ1 ቦምብ በመያዝ
ከ3 እስከ 5ኛ ያሉ ተከሳሾች ጋር ከፒያሳ ወደ ቴድሮስ አደባባይ በመምጣትና የድጋፍ ቲሸርት ልብስ ገዝተው በመልበስ ተመሳስለው
መግባታቸው ተጠቅሷል።በድጋፍ ሰልፉ ላይ ጠቅላይ ሚኒስትር ዐቢይ ንግግር ካደረጉ በኋላ ተከሳሾቹ በ40 ሜትር ርቀት ላይ ቦምብ
የወረወሩ ሲሆን በዚህም የሁለት ሰዎች ህይወት ሲያልፍ ከ163 በላይ ሰዎች ላይ ደግሞ ከከባድ እስከ ቀላል የአካል ጉዳት እንደደረሰባቸውም
ዐቃቤ ህግ አስረድቷል፡፡የዐቃቤ ህግን የሰነድና የሰው ምስክር እንዲሁም የተከሳሾችን መከላከያ የመረመረው ፍርድ ቤቱ ተከሳሾቹን
በተከሰሱበት ወንጀል ጥፋተኛ ብሏቸዋል።በተከሳሾቹ ላይ የቅጣት ውሳኔ ለመስጠትም ለጥቅምት 17 ቀን 2013 ዓ.ም ተለዋጭ ቀጠሮ
ሰጥቷል።እስከ ጥቅምት 17 ድረስ ግን የቅጣት ማቅለያዎችን ማቅረብ እንደሚቻል ትዕዛዝ ሰጥቷል።በታሪክ አዱኛ
- 'አዲሱ ገረመው አዲስ አበባ፡- የ2013 በጀት ዓመት የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንደሚፈጥር የፌዴራል የከተሞች
የስራ ዕድል ፈጠራና የምግብ ዋስትና ኤጀንሲ አስታወቀ። ከተሳታፊዎች ውስጥ 50 በመቶዎቹ ሴቶች መሆናቸው ተጠቆመ ። ኤጀንሲው
ለአዲስ ዘመን
ጋዜጣ በላከው መግለጫ
እንዳስታወቀው፤ በ2013 በጀት
አመት አንደኛው ዙር
የሞዴል ጥቃቅንና አነስተኛ
ኢንተርፕራይዞች ሀገር አቀፍ
ኤግዚቢሽንና ባዛር ‹‹ዘላቂነት
ያለው የገበያ ትስስር
ለስራ ዕድል ፈጠራና
ለኢንተርፕራይዞች ልማት መሰረት
ነው ›› በሚል
መሪ ቃል ከታህሳስ
22 እስከ ታህሳስ 28 ቀን
2013 ዓ.ም በጀሞ አንድ አደባባይ ትራፊክ መብራት ፊትለፊት ለሰባት ተከታታይ ቀናት የሚካሄድ ይሆናል። የ4 ሚሊዮን ብር ሽያጭና
የገበያ ትስስር እንዲሚፈጥርም ይጠበቃል። በኤግዚቢሽንና ባዛሩ ላይ ከሁሉም ክልሎችና ከተሞች የተውጣጡ 202 የጥቃቅን እና አነስተኛ
ኢንተርፕራይዞች 10 አነስተኛና መካከለኛ ኢንዱስትሪዎች የሚሳተፉ ሲሆን፤ ሴቶች 50 በመቶ እና አካል ጉዳተኛ ሦስት በመቶ በማሳተፍ
ምርትና አገልግሎታቸው ከ20ሺ በላይ በሚሆን ተጠቃሚ የህብረተሰብ ክፍል እንዲጎበኝ ይደረጋል ብሏል ። ባዛሩ ከተለያዩ ክልሎችና
አካባቢዎች የተሰባሰቡና በልዩ ልዩ ዘርፎች የተሰማሩ ብቁና ተወዳዳሪ ኢንተርፕራይዞችንና አንቀሳቃሾችን የሚያሳትፍ ሲሆን፤ በአንድ
ማዕከል በማገናኘት በሚፈጠረው ትውውቅና የልምድ ልውውጥ በመካከላቸው ጤናማ የውድድር ስሜት ለማቀጣጠል እንደሚያስችልም “ኤጀንሲው
አመልክቷል ። ባህላዊና ዘመናዊ የጨርቃጨርቅና
አልባሳት ምርት ውጤቶች፣
ባህላዊና ዘመናዊ የቆዳ
አልባሳትና የቆዳ ምርት
ውጤቶች፣ ባህላዊ የዕደ-ጥበባትና
ቅርጻ-ቅርጽ ሥራዎችና
ውጤቶች፣ የብረታብረት፣ የእንጨት
ሥራና የኢንጅነሪንግ ስራዎችና
ውጤቶች፣ የአግሮ-ፕሮሰሲንግ
ምርቶች እና የከተማ
ግብርና ውጤቶች፣ የቴክኖሎጂ
ውጤቶችና የፈጠራ ስራዎች፣
ፈሳሽ ሳሙና፣አልኮል፣ሳኒታይዘር፣
የአፍና አፍንጫ መሸፈኛ
ጭንብል/ማስኮች/፣
እና ሌሎችም ምርቶች
በኤግዚቢሽንና ባዛሩ እንደሚቀርቡ
አስታውቋል። የአዲስ አበባ ነጋዴ ሴቶች ማህበር፣ የሴቶች ኢንተርፕርነርሺፕ ልማት ፕሮግራም፣ ኢንተርፕርነርሺፕ ልማት ማዕከል፣
ፋሽን ዲዛይን አሶሴሽን፣ የሴቶች ራስ አገዝ ድርጅት፣ የባህልና ቱሪዝም ሚኒስቴር በዕደ ጥበብ ዘርፍ የተሰማሩ ኢንተርፕራይዞችና
ሌሎችም ተሳታፊ ኢንተርፕራይዞች እንደሚሆኑ ጠቁሟል። ሁነቱ የተሞክሮ ልውውጥና
የንግድ ልማት ግንዛቤ
ከማዳበሩም ባሻገር፤ ኢንተርፕራይዞች
ከተጠቃሚው ህብረተሰብ ጋር
በሚያደርጉት ግንኙነት ዘላቂ
የገበያ ትስስር ለመፍጠር
የሚያስችል ምቹ አጋጣሚ
ይሆንላቸዋል። ምርቶቻቸውንና አገልግሎታቸውን
ለተጠቃሚዎች በቀጥታ በመሸጥም
ተጠቃሚ እንደሚሆኑም እጀንሲው
አስታውቋል ።አዲስ ዘመን ታህሳስ 22/2013'
- የአሜሪካው ሜሪየም ዌብስተር መዝገበ ቃላት እንደ ኦክስፎርድ መዝገበ ቃላት ሁሉ ታዋቂና ዓለም አቀፍ ተቀባይነት ያለው መዝገበ
ቃላት ነው።አንዲት ወጣት ጥቁር አሜሪካዊት ታዲያ ለዚህ መዝገበ ቃላት አሳታሚ በጻፈቸው ደብዳቤ ምክንያት መዝገበ ቃላቱ ዘረኝነት
ወይም (racism) ለሚለው የእንግሊዝኛ ቃል የትርጉም ፍቺ ማሻሻያ ለማድረግ ወስኗል።
- source_sentence: የደኢሕዴን ከፍተኛ አመራሮች በሐዋሳ እየመከሩ ነው
sentences:
- 'የሁለት ዞኖች ከፍተኛ አመራሮች ታግደዋል የደቡብ ኢትዮጵያ ሕዝቦች ዴሞክራሲያዊ ንቅናቄ (ደኢሕዴን) ከፍተኛ አመራሮች ከሐሙስ
ሐምሌ 18 እስከ 22 ቀን 2011 ዓ.ም. ድረስ በሐዋሳ እየመከሩ ነው፡፡ ከፍተኛ አመራሮቹ በክልሉ ውስጥ በተከሰተው ወቅታዊ
ችግርና በአገራዊ ጉዳዮች ላይ እንደሚወያዩ፣ በተለይ በድርጅቱ ህልውና ላይ እንደሚያተኩሩም ታውቋል፡፡ የደኢሕዴን ሊቀመንበር
ወ/ሮ ሙፈሪያት ካሚል በምክክሩ ላይ ባደረጉት ንግግር፣ በአገር ደረጃና በደቡብ ክልል የፖለቲካና የፀጥታ ጉዳዮች ላይ ወጥ አቋም
ያለው አመራር አስፈላጊነትን አውስተዋል፡፡ ከዚህ አንፃርም አመራሩ ራሱን በመፈተሽ ለለውጥ ዝግጁ መሆን እንዳለበት አስታውቀዋል፡፡
እንደ ወ/ሮ ሙፈሪያት ማብራሪያ የደኢሕዴን ህልውና መረጋገጥ የሚችለው፣ አመራሩ ከመቼውም ጊዜ በላይ መንቀሳቀስ ሲችል ብቻ እንደሆነ
ነው፡፡ አመራሩ ምንም ነገር እንደማይመጣ በመኩራራት ወይም በወቅታዊ ሁኔታዎች በመሥጋት የሚቀጥል ከሆነ ውጤት እንደማይኖር፣
በወቅቱ ተጨባጭ ሁኔታ ላይ በዝርዝር በመወያየት የድርጅቱ ህልውናን ማስቀጠል ላይ ትኩረት መስጠት እንደሚገባ አስረድተዋል፡፡
ይህ በዚህ እንዳለ ደኢሕዴን የሲዳማ ዞን፣ የሐዋሳ ከተማና የሃድያ ዞን ከፍተኛ አመራሮችን ማገዱንና ለወላይታና ለካፋ ዞኖች
አመራሮች ደግሞ ማስጠንቀቂያ መስጠቱን አስታውቋል፡፡ ከክልልነት ጥያቄ ጋር በተያያዘ በተለይ በሲዳማ ዞን ወረዳዎችና በሐዋሳ
ከተማ በተፈጸሙ ጥቃቶች የበርካቶች ሕይወት ማለፉን፣ የበርካቶች ቤት ንብረት መውደሙን ተከትሎ የደቡብ ክልል በፌዴራል መንግሥት
የፀጥታ አካላት ኮማንድ ፖስት ሥር እንዲተዳደሩ መወሰኑ የሚታወስ ሲሆን፣ በዚህም ምክንያት የደኢሕዴን ሥራ አስፈጻሚ ኮሚቴ በሐዋሳ
ከተማ ባደረገው ስብሰባ የአመራሮቹን የዕግድ ውሳኔ አሳልፏል፡፡ በዚህ ስብሰባው የክልሉን የፀጥታ ሁኔታ እንደገመገመ የገለጸው
የሥራ አስፈጻሚ ኮሚቴው፣ በተፈጠረ የፀጥታ ችግሮች ሳቢያ የሲዳማ ዞንና የሐዋሳ ከተማን፣ እንዲሁም የሃዲያ ዞን ‹‹የፊት አመራሮች››
እንዳገደ አስታውቋል፡፡ በተያያዘም በወላይታና በካፋ ዞኖች እየታዩ ያሉ ሁኔታዎች የሕግ ተጠያቂነትን የሚያስከትሉ ስለሆኑ፣ አመራሩ
የሕዝቡን ደኅንነት ለማስጠበቅ እንዲሠራ ሲል አስጠንቅቋል፡፡ በዚህም ሳቢያ የሲዳማ ዞን አስተዳዳሪ አቶ ቃሬ ጫዊቻና የሐዋሳ
ከተማ ከንቲባ አቶ ሱካሬ ሹዳ መታገዳቸውን ለማወቅ ተችሏል፡፡ የሥራ አስፈጻሚ ኮሚቴው በሐዋሳና በአካባቢው ሐምሌ 11 ቀን 2011
ዓ.ም. ክልልነትን እናውጃለን በሚል በተፈጸመ ጥቃት የተጎዱ ቤተሰቦችን መልሶ ለማቋቋም እንደሚሠራ በማስታወቅ፣ የጥፋቱ ተሳታፊዎችም
ሆኑ አስተባባሪዎች የሕግ ተጠያቂ እንዲሆኑ እሠራለሁ ብሏል፡፡ አሁን ለተከሰተው ጥፋትም ሆነ እየተስተዋለ በሚገኘው ሥርዓተ አልበኝነት
ውስጥ የአመራሩ ሚና ከፍተኛ መሆኑን ያመነው የሥራ አስፈጻሚ ኮሚቴው፣ ይኼንን ለማረም ከሥራ አስፈጻሚ እስከ ታችኛው የአመራር
ሥርዓት ድረስ ፈትሾ ዕርምጃ እንደሚወስድ ቃል ገብቷል፡፡ '
- 'አዲስ አበባ፣ ጥር 2፣ 2012 (ኤፍ.ቢ.ሲ) በፓኪስታን ደቡብ ምእራብ ኩዌታ ከተማ በመስጊድ ላይ በተፈፀመ የቦብም ጥቃት
የሞቱ ሰዎች ቁጥር 15 መድረሱን ፖሊስ አስታወቀ።በአርብ ፀሎት ላይ በነበሩ ሰዎች ላይ በተፈፀመው የቦምብ ጥቃቱ ከሞቱት ሰዎች
በተጨማሪም ከ20 በላይ ሰዎች ላይ የተለያየ መጠን ያለው ጉዳት መድረሱንም ነው የገለፀው።በመስጊድ ላይ ለተፈፀመው ጥቃትም በአካባቢው
የሚንቀሳቀሰው የአሸባሪው ኢስላሚክ ስቴት (አይ.ኤስ) ቡድን ኃላፊነት መውሰዱ ተነገሯል።በሽብር ጥቃቱ በአፍጋኒስታን የሚንቀሳቀሰው
የታሊባን ቡድን አመራሮች ተገድለዋል ቢባልም፤ ታሊባን ግን አመራሮቼ ላይ ጉዳት አልደረሰም ሲል አስተባብሏል።ምንጭ፦ '
- በኢትዮጵያ ፕሪምየር ሊግ ዘጠነኛ ሳምንት መቐለ 70 እንደርታ በሜዳው ሲዳማ ቡናን 3-1 ካሸነፈ በኋላ የሁለቱ ቡድኖች አሰልጣኞች
አስተያየታቸውን ሰጥተዋል። ” ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን በመሆኑ ጨዋታው ከባድ ነበር” – ገ/መድኅን ኃይሌ
– መቐለ 70 እንደርታስለ ጨዋታው” ጨዋታው ከባድ ነበር፤ ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን ነው ፤ የያዙት ነጥብም
ለዚህ ጨዋታ ጥሩ የስነልቦና ጥንካሬ አስገኝቶላቸዋል። በአንፃሩ እኛ አራት ጨዋታዎች ሳናሸንፍ ነው ወደ ጨዋታው የገባነው። በዚ
ምክንያት ጨዋታው አክብዶብን ነበር። በአጠቃላይ ጨዋታውን አሸንፈናል። በቀጣይ ጨዋታዎች ቀስ በቀሰ ወደ አሸናፊነት መጥተን ይህን
እናስቀጥላለን። ”“ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም” ዘርዓይ ሙሉ – ሲዳማ ቡና ስለ ጨዋታው ” ከዕረፍት በፊት ከጨዋታ
ውጪ ኳሱ በኋላ ተጫዋቾቻችን መረጋጋት አልቻሉም። በጨዋታው አሳፋሪ ዳኝነት ነው ያየሁት። ስለ ጨዋታው ብጠይቀኝ አሳፋሪ እና
ሚዛናዊት የሌለው ዳኝነት ነው። የተቆጠርቡን ግቦች እኛ ላይ ጥፋት እየተፈፀሙ የተቆጠሩ ናቸው። ከጨዋታ ውጭ ሆኖም ግብ ይቆጠራል።
በቃ ይህንን ነው ያየሁት። ከዚ ውጭ ግን መቐለ ለማሸነፍ የነበረው ተነሳሽነት ጥሩ ነበር። እንደ ቡድን ተንቀሳቅሰዋል እኛም
የተሻለ ኳስ ተቆጣጥረን ተጫውተናል። እንዳያችሁት ኳሱን መስርተን ነው የወጣነው ግን በተለያዩ ስህተቶች ግብ ሲቆጠርብን የተጫዋቾቻችን
ብቃት አወረደው። የምንፈልገው እንቅስቃሴ ያላደረግነው በዳኞች ምክንያት ነው። ገና በሰባተኛ ደቂቃ ነው የተጀመረው ይሄ ነገር።
ጨዋታው ጥሩ ሆኖ ሳለ ሚዛኑ የጠበቀ ዳኝነት አላየንም። ዳኝነቱ ልክ ካልሆነ የጨዋታው እንቅስቃሴ እንዳለ ይበላሻል ይሄ ሁሉ
ደጋፊ የገባው ጥሩ ጨዋታ ለማየት ነው። ለምንድነው ተጫዋቾች ሮጠው ዳኛ ላይ የሚሄዱት። በተደጋጋሚ ስህተት ይሰራ ነበር። እኛ
ተጫዋቾቻችንን ብናረጋጋም የሚያደርጉት ስህተት ለሌላ ነገር የሚዳርግ ነበር። ዳኞቹ አቅም አንሷቸው ነው ብዬ አላስብም፤ ሆን
ተብሎ የተደረገ ነገር ነው። ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም። መቐለን ግን እንደ ቡድን ጥሩ ነው እንኳን ደስ አላቹ
ማለት እፈልጋለው። ”ስለ ስታድየሙ ድባብ” ደጋፊው የሚደነቅ ደጋፊ ነው። በስርዓት ነው ቡድኑን የሚደግፈው። ምንም ነገር ቢፈጠር
ቡድኑን ነበር ሲደግፍ የነበረው። ”ዳኝነት ላይ ስለሰጠው አስተያየት” እኔ አዳላ አላልኩም። ግን ብቃት ማነስ ነው ብዬ አላስብም።
እነዚህ ሁሉ ግቦች እስኪቆጠሩ ብቃት ማነስ አይደለም። በአጠቃላይ ዳኝነቱ ሚዘናዊ አልነበረም። ሁሉም ግብ ላይ የዳኛ ተፅዕኖ
አለበት፤ በቃ ይሄን ነው የምለው። አንዱን ከጨዋታ ውጪ ብለህ አንዱን የምታፀድቅ ከሆነ ስህተት ነው። “
- source_sentence: የከምባታና ጠንባሮ አርሶአደሮች
sentences:
- በደሴ ማረሚያ ቤት በተደረገ የኮቪድ-19 ምርመራ 13 ሰዎች ቫይረሱ እንዳለባቸው ማረጋገጡን የከተማው ጤና መምሪያ አስታወቀ።የመምሪያው
ኃላፊ አቶ አብዱልሃሚድ ይመር በተለይ ለቢቢሲ እንዳስታወቁት 12ቱ የህግ ታራሚዎች ሲሆኑ ሌላኛው ደግሞ የማረሚያ ቤቱ ባልደረባ
ናቸው።እንደ አቶ አብዱልሃሚድ ገለጻ ከሆነ ከማረሚያ ቤቱ ጋር በመነጋገርም አዲስ የሚገቡ ታራሚዎች ለ14 ቀናት ለብቻቸው እንዲቆዩ
ከማድረግ በተጨማሪ በመጨረሻዎቹ ቀናት ላይ ምርመራ ሲደረግላቸው ቆይቷል።ከሐምሌ 20 በኋላ ማረሚያ ቤቱ የገቡ 46 ታራሚዎች
ላይ በተደረገ ምርመራ 10 ሰዎች ኮሮናቫይረስ እንዳለባቸው ለማረጋገጥ ተችሏል።“ታራሚዎቹ ከተለያዩ አካባቢዎች የመጡ ናቸው።
ከተለያዩ ከደቡብ ወሎ ወረዳዎች እና ከደሴ ከተማም የተገኙ ናቸው” ብለዋል።በሁለተኛ ዙር 60 ሰዎች ላይ በተደረገ ምርመራ ሦስቱ
ቫይረሱ እንዳለባቸው ተረጋግጧል።በሁለተኛው ዙር ቫይረሱ ከተገኘባቸው መካከል በመጀመሪያው ዙር እንዳለባቸው ከታወቁ ሰዎች ጋር
ንክኪ የነበራቸው እና አንድ ማረሚያ ቤቱ ባልደረባ ይገኙበታል።የማረሚያ ቤቱን የሕግ ታራሚዎች እና ባልደረባዎችን በሙሉ ለመመርመር
መቻሉንም አቶ አብዱልሃሚድ አስታውቀዋል።ቫይረሱ የተገኘባቸው ቦሩ ሜዳ መጀመሪያ ደረጃ ሆስፒታል የተላኩ ሲሆን፤ ተጓዳኝ ህመም
ያለበት አንድ ታራሚ ካሳየው የህመም ምልክት ውጭ ሁሉም በጥሩ ሁኔታ ላይ እንደሚገኙ ተናግረዋል።በማረሚያ ቤቱ የቫይረሱ ስርጭት
እንዳይስፋፋ አዲስ የሚገቡትን እና ነባር ታራሚዎችን ከመመርመር ባለፈ የግንዛቤ ማስጨበጫ ሥራ፣ የኬሚካል ርጭት፣ ርቀትን ማስጠበቅ
እና ንጽህና የማስጠበቅ ሥራ እየተከናወነ ነው ብለዋል።ባለፉት ወራት በአማራ ክልል በተደረገ የኮሮናቫይረስ ምርመራ 83 አሽከርካሪዎች
እና ረዳቶቻቸው ቫይረሱ ተገኝቶባቸዋል።በክልሉ ቫይረሱ ከተገኘባቸው ሰዎች መካካል 23 የህክምና ባለሙያዎች እንደሚገኙበትም ከአማራ
ህብረተሰብ ጤና ኢንስቲትዩት ያገኘነው መረጃ ያሳያል።በአጠቃላይ በኢትዮጵያ በኮቪድ-19 የተያዙ ሰዎች ቁጥር 25,118 የደረሱ
ሲሆን የሟቾች ቁጥር 463 ደርሷል። እንዲሁም አጠቃላይ ከበሽታው ያገገሙ ሰዎች 11,034 ደርሰዋል።
- 'በደቡብ ክልል ከፋ ዞን ዴቻ ወረዳ ከ20 ሺህ በላይ የከምባታና ጠምባሮ አርሶአደሮች በማንነታችን ጥቃት ደርሶብናል በማለት
እየተፈናቀሉ ናቸው፡፡አርሶአደሮቹ የተፈናቀሉት ከሶስት ሳምንት በፊት በወረዳው ከ30 በላይ ሲቪሎች በታጠቁ ግለሰቦች በአሰቃቂ
ሁኔታ መገደላቸውን ተከትሎ ነው ተብሏል፡፡ጉዳያችንን ለክልሉ መንግሥት ብናሳውቅም ችላ ተብለናል ሲሉ አርሶአደቹ ተናግረዋል።
አሁን ለችግር መጋለጣቸውንም ለቪኦኤ አስረድተዋል፡፡የከምባታ ጠንባሮ ዞን በበኩሉ የተፈናቀሉ ዜጎች በስቃይ ላይ መሆናቸውን ገልጦ
መፍትሔ እየተፈለገ መሆኑን አስታውቋል፡፡ '
- ባሕር ዳር፡ መስከረም 7/2012 ዓ.ም (አብመድ) በጣልያን ባሕር ዳርቻ ጠባቂዎች ሕይወታቸው የተረፉ 90 ስደተኞችን ማልታ
ለመቀበል ተስማማች፡፡በቀጣዩ ሳምንት ደግሞ በአዲስ የስደተኞች መከፋፈያ አሠራር ዘዴ ላይ የአውሮፓ ኅብረት ሊመክር ነው፡፡የማልታ
የሕይወት አድን ትብብር ማዕከል በጠየቀው መሠረት ትናንት የጣልያን ባሕር ዳርቻ ጠባቂ ቡድን ስደተኞቹን ታድጓል፡፡ ከሊቢያ የባሕር
ክልል ውጭ እየሰመጠች ከነበረች ጀልባ ነው ስደተኞቹን ማትረፍ የተቻለው፡፡ ማልታ በመጀመሪያ ስደተኞቹን ወደ ሀገሯ ለማስገባት
ፈቃደኛ አልሆነችም ነበር፡፡
- source_sentence: የአዲስ አበባ ከተማ አስተዳደር የጀመረው ኦዲት ወደ ባለ ኮከብ ሆቴሎችና ኢንዱስትሪዎች ተሸጋገረ
sentences:
- የኢትዮጵያ እግር ኳስ ፌዴሬሽን ከኢትዮጵያ ብሮድካስቲንግ ኮርፖሬሽን (EBC) ጋር በተፈራረመው የመግባቢያ ሰነድ ስምምነት ዙሪያ
ከፕሪሚየር ሊግ ክለቦች ጋር ነገ ከጠዋቱ 4፡00 ጀምሮ በኢንተርኮንትኔንታል ሆቴል ውይይት ያካሂዳል፡፡በውይይቱ ፌዴሬሽኑና EBC
የኢትዮጵያ ፕሪሚየር ሊግ ጨዋታዎችን በቀጥታ የተሌቭዥን ስርጭት አማካኝነት በመላ ኢትዮጵያ ተደራሽ ለማድረግ ነሃሴ 6/2007
ዓ.ም የተፈራረሙትን የመግባቢያ ሰነድ አስመልክቶ ስለ ስምምነቱ ፋይዳና ሂደት ገለፃ የሚደረግ ሲሆን ከፕሪሚየር ሊግ ክለቦች
ለሚነሱ ጥያቄዎች ማብራሪያ ይሰጣል፡፡ በክለቦች መብትና ተጠቃሚነት ዙሪያም ግልጽ ውይይት ይካሄዳል፡፡ስምምነቱ ይፋ መደረጉንና
መፈረሙን ተከትሎ ከተለያዩ በላድርሻ አከላት የተነሱት ጥያቄዎች በተለይም የኢትዮጵያ ቡና ስፖርት ክለብ በደብዳቤ አቋሙን የገለጸበት
አግባብ ተቀባይነት እንዳለው ታምኖበታል፡፡ ነገ ከጠዋቱ 4፡00 ጀምሮ የሚካሄደውና የፕሪሚየር ሊግ ክለቦች ፕሬዝዳንቶች እና
ስራ አስኪያጆች የሚሳተፉበት የውይይት መድረክ ስምምነቱን አስመልክቶ ሊነሱ የሚችሉትን ጥያቄዎች በመቀበል የማስተካካያ ርምጃ
ለመውሰድ የሚያስችል በመሆኑ ሁሉም ክለቦች የውይይቱ ተሳታፊ እንዲሆኑ ፌዴሬሽኑ ጥሪውን አስተላልፋል፡፡ፌዴሬሽኑና ኢቢሲ አለም
አቀፍና የሀገር ውስጥ ጨዋታዎችን በቴሌቭዥን የቀጥታ ስርጭት ለማስተላለፍ የተፈራረሙት የመግባቢያ ሰነድ ዓላማዎች በዋነኝነት
የወጣቱን ትውልድ የእግር ኳስ ስፖርት ተነሳሽነት ማሳደግ፣ የብሔራዊ እና አገር ውስጥ ውድድሮችን የቀጥታ ስርጭት ተደራሽነት
ማረጋገጥ እንዲሁም ለእግር ኳስ ስፖርት ዘላቂና አስተማማኝ እድገት አመቺ ሁኔታዎችን በመፍጠር ላይ እንደሚመሰረት መገለጹ ይታወሳል፡፡ማስታወሻ፡-
በውይይቱ የሚሳተፉት የፌዴሬሽኑ የስራ ሃላፊዎችና የክለቦች ተወካዮች ብቻ ናቸው፡፡
- ለመጀመርያ ጊዜ በተሟላ ደረጃ መሬትና መሬት ነክ ይዞታዎችን ኦዲት በማድረግ ላይ የሚገኘው የአዲስ አበባ ከተማ አስተዳደር፣
የኦዲት አድማሱን በማስፋት በባለ ኮከብ ሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራ ሊያካሂድ ነው፡፡ የአዲስ አበባ ከተማ አስተዳደር
ከ1995 ዓ.ም. ጀምሮ እስከ ኅዳር 2004 ዓ.ም. የከተማ ቦታ በሊዝ ስለመያዝ የሚደነግገው እስኪወጣበት ጊዜ ድረስ፣ ላለፉት
15 ዓመታት በኢንዱስትሪ ዞኖችና በተናጠል ለሚካሄዱ ፋብሪካዎች በርካታ ቦታዎችን ሰጥቷል፡፡ ከዚህ በተጨማሪ ለበርካታ ሆቴሎች
ግንባታ የሚሆን ሰፋፊ ቦታዎችንም እንዲሁ አቅርቧል፡፡ነገር ግን አስተዳደሩ በሰጣቸው ቦታዎች ላይ ስለተከናወነው ልማትም ሆነ፣
የተከናወኑት ግንባታዎች በውላቸው መሠረት ስለመካሄዳቸው በትክክል የተጠናቀረ መረጃ እንደሌለ ይገልጻል፡፡በከተማው ውስጥ የሚገኙ
አምራች ኢንዱስትሪዎችንና ባለ ኮከብ ሆቴሎችን ቁጥር ለማወቅ፣ በአግባቡ ሥራዎችን ባላካሄዱት ላይ ደግሞ የማስተካከያ ዕርምጃ
ለመውሰድ ኦዲት እንደሚከናወን ለማወቅ ተችሏል፡፡የአዲስ አበባ ከተማ አስተዳደር ምክትል ከንቲባ ታከለ ኡማ (ኢንጂነር) ለሪፖርተር፣
‹‹እስካሁን ግንባታ ሳይካሄድባቸው ለዓመታት ታጥረው የቆዩ ከአራት ሚሊዮን ካሬ ሜትር በላይ ቦታ መልሰን ወስደናል፤›› ብለዋል፡፡‹‹‹ይህ
ትልቅ ሥራ ነው፤›› በማለት ምክትል ከንቲባው ገልጸው፣ በቀጣይ ደግሞ በሆቴሎች፣ በኢንዱስትሪዎች፣ በድንጋይ ማምረቻ ካባዎች፣
እንዲሁም በመኖሪያ ቤቶች ላይ ኦዲት ተካሂዶ ዕርምጃ ይወሰዳል ሲሉ ገልጸዋል፡፡ ‹‹ሥራው ውስብስብ በመሆኑ የሚካሄደው ኦዲት
አንዴ ብቻ ሳይሆን ሦስት፣ አራት ጊዜ ይታያል፡፡ ካስፈለገም የማረጋገጡን ሥራ ማዕከላዊ ስታትስቲክስ ኤጀንሲ ሊያከናውን ይችላል፤››
በማለት ምክትል ከንቲባው አስረድተዋል፡፡በአዲስ አበባ ከተማ አምራች ኢንዱስትሪዎች፣ ሆቴሎች፣ ለድንጋይ ማውጪያ የተሰጡ ቦታዎች
ያሉበት ወቅታዊ ሁኔታ በትክክል አይታወቅም፡፡ ለእነዚህ ዘርፎች የቀረበው ቦታ ለታለመለት ዓላማ በትክክል ስለመዋሉ፣ ከዘርፉ
የሚመነጨው ኢኮኖሚም ሆነ የተፈጠረው የሥራ ዕድል ሽፋን እምብዛም አይታወቅም፡፡ይህንን ሥራ በተሻለ ደረጃ ለመሥራት የከተማው
ኢንዱስትሪ ቢሮ ከማዕከላዊ ስታትስቲክስ ኤጀንሲ ጋር በጋራ ለመሥራትም መስማማታቸው ታውቋል፡፡ የማዕከላዊ ስታትስቲክስ ኤጀንሲ
የቢዝነስ ስታትስቲክስ ዳይሬክተር አቶ ዘለዓለም ኃይለ ጊዮርጊስ፣ በሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራውን ለማካሄድ ሙሉ ዝግጅት
እየተደረገ መሆኑን ለሪፖርተር ገልጸው፣ በጉዳዩ ላይ ዝርዝር መረጃ ከመስጠት ተቆጥበዋል፡፡
- ጠቅላይ ሚኒስትር ዶክተር አብይ አህመድ ለተለያዩ የመንግስት የስራ ሀላፊዎች ሹመት መስጠታቸውን የጠቅላይ ሚኒስቴር ጽህፈት ቤት
አስታውቋል።በጠቅላይ ሚኒስትር ጽህፈት ቤት መግለጫ መሰረት፦ 1.ዶክተር አምባቸው መኮንን፦ የጠቅላይ ሚንስትሩ የመሰረተ ልማትና
የከተማ ልማት አማካሪ ሚንስትር 2.አቶ ገብረእግዚአብሔር አርአያ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት
ረዳት ተጠሪ 3.አቶ ጫኔ ሽመካ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ 4.አቶ ጫላ
ለሚ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ5.አቶ ተስፋሁን ጎበዛይ፦ የጠቅላይ ሚንስትሩ
የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ6.ብርጋዴል ጄኔራል አህመድ ሀምዛ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን
ዋና ዳይሬክተር7.አቶ ሞቱማ መቃሳ፦ የጠቅላይ ሚንስትሩ የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ8.አቶ ከበደ ይማም፦
የአካባቢ ጥበቃ ደንና የአየር ንብረት ለውጥ ኮሚሽን ምክትል ኮሚሽነር9.አቶ አዘዘው ጫኔ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር10.አቶ
አወል አብዲ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን ምክትል ዋና ዳይሬክተር11.አቶ ሙሉጌታ በየነ፦ የጉምሩክ ኮሚሽን ምክትል
ኮሚሽነር12. ዶክተር ፅጌረዳ ክፍሌ፦ የብሔራዊ ኤች. አይ. ቪ/ኤድስ መከላከያና መቆጣጠሪያ ጽ/ቤት ዋና ዳይሬክተር13.ወይዘሮ
ያምሮት አንዱዓለም፦ የአርማወር ሐሰን የምርምር ኢንስቲትዩት ምክትል ዋና ዳይሬክተር14.ዶክተር ሚዛን ኪሮስ፦ የኢትዮጵያ ጤና
መድህን ኤጀንሲ ዋና ዳይሬክተር15.አቶ ሀሚድ ከኒሶ፦ የሰነዶች ማረጋገጫና ምዝገባ ኤጀንሲ ምክትል ዋና ዳይሬክተር16.አቶ ከበደ
ጫኔ፦ የስደተኞችና ከስደት ተመላሾች ጉዳይ ኤጀንሲ ዋና ዳይሬክተር17.ወይዘሮ ምስራቅ ማሞ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር
ሆነው ተሹመዋል።
- source_sentence: በቁጥጥር ስር የዋሉ የህወሓት ታጣቂዎች ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ ከመሆን
እንዲቆጠቡ አስገነዘቡ
sentences:
- 'የፕሬዚዳንት ዶናልድ ትራምፕ ተቺዎች እንደሚሉት፤ ፕሬዚዳንቱ ለዘመናት የአሜሪካ ወዳጆች በሆኑት ኢትዮጵያ እና ግብፅ መካከል
ታላቁ የሕዳሴ ግድብን በተመለከተ ውጥረት ቀስቅሰዋል።ይህም በአሜሪካ እና በአፍሪካ የዲፕሎማሲ ታሪክ ትልቁ የትራምፕ ስህተት
ነው ይላሉ።ትራምፕ ከቀናት በፊት ግብፅ "ግድቡን ልታፈነዳው ትችላለች" ማለታቸው ይታወሳል። ጥር ላይ ፕሬዚዳንቱ "ስምምነት
መፍጠር ችያለሁ፤ ከባድ ጦርነትም አስቁሜያለሁ" ብለው የኖቤል የሰላም ሽልማት እንደሚገባቸው መናገራቸው ይታወሳል።ነገር ግን
ተሸላሚ የሆኑት ጠቅላይ ሚንስትር ዐብይ አሕመድ ነበሩ ።ትራምፕ የኖቤል የሰላም ሽልማት እንደሚገባቸው ሲናገሩ ጉዳዩን ግልፅ
ባያደርጉትም፤ በግብፁ ፕሬዘዳንት አብዱልፈታህ አል-ሲሲ ጥሪ መሠረት በኢትዮጵያ እና በግብፅ መካከል ጣልቃ ስለመግባታቸው እየተናገሩ
እንደነበረ ይታመናል።ትራምፕ በአንድ ወቅት አብዱልፈታህ አል-ሲሲን "የኔ ምርጡ አምባገነን" ማለታቸው አይዘነጋም።ግብፅ ታላቁ
ሕዳሴ ግድብ "ለደህንነቴ ያሰጋኛል" ትላለች። ሱዳንም የግብፅን ያህል ባይሆንም ስጋቱን ትጋራለች። በሌላ በኩል ኢትዮጵያ የኃይል
አመንጪውን ግድብ አስፈላጊነት አስረግጣ ትገልጻለች።ኬንያ የሚገኘው የአፍሪካ ቀንድ የጸጥታ ጉዳይ ተንታኝ ረሺድ አብዲ እንደሚለው፤
በግድቡ ዙሪያ ኢትዮጵያ እና ግብፅን ለማደራደር አሜሪካ ጣልቃ መግባቷ የሁለቱን አገሮች ውጥረት አባብሷል።"ኢትዮጵያ በግድቡ
አቅራቢያ የጸጥታ ኃይሏን እያጠናከረች ነው። ቤንሻንጉል ጉሙዝ ክልልን ከበረራ ውጪ ማድረጓ አንዱ ማሳያ ነው። በግድቡ ዙሪያ
በረራ የሚያግድ መሣሪያም ተገጥሟል። ግብፅ የወታደራዊ ቅኝት በረራ ልታደርግ እንደምትችል ከመስጋት የመነጨ ሊሆን ይችላል" ይላል።ተንታኙ
እንደሚናገረው፤ ትራምፕ ዓለም አቀፍ ዲፕሎማሲ እንዴት እንደሚሠራ የሚገነዘቡ አይመስልም።"በንግዱ ዓለም እንደሚደረገው ስምምነት
ላይ መድረስ ይቻላል የሚል የተዛባ አመለካከት አላቸው። የውጪ ጉዳይ መያዝ ያለበትን ጉዳይ ግምዣ ቤት ድርድሩን እንዲመራ ያደረጉትም
ለዚህ ነው። ከመነሻውም መጥፎ የነበረውን ሁኔታም አባብሶታል" ሲልም ረሺድ ያስረዳል።ኢትዮጵያ ከግብፅ እና ከሱዳን ጋር ያለው
ድርድር ሳይቋጭ ግድቡን ለመሙላት በመወሰኗ አሜሪካ የ100 ሚሊዮን ዶላር እርዳታ ማጠፏ ተዘግቧል።ረሺድ "ኢትዮጵያ አሜሪካ እንደከዳቻት
ይሰማታል። ብዙ ኢትዮጵያውያን ትራምፕን የጥላቻ ምልክት አድርገውታል" በማለት ሁኔታውን ይገልጻል።የዴሞክራት እጩው ጆ ባይደን
እንዲያሸንፉም የበርካታ ኢትዮጵያውያን ምኞት ነው።አሜሪካ የሚገኘው ሴንተር ፎር ግሎባል ዴቨሎፕመንት ውስጥ የፖሊሲ አጥኚ ደብሊው
ጉዬ ሙር እንደሚሉት፤ የትራምፕ አስተዳደር እስራኤልና የአረብ ሊግ አገራት መካከል ሰላም መፍጠር ስለሚፈልግ ከግብፅ ጎን መቆሙ
የሚጠበቅ ነው።ግብፅ ከእስራኤል ጋር ዘመናት ያስቆጠረ ዲፕሎማሲያዊ ትስስር አላት። ትራምፕ የአረብ ሊግ አገራት ለእስራኤል እውቅና
እንዲሰጡ ጥረት እያደረጉ ስለሆነ አብዱልፈታህ አል-ሲሲን ማስቀየም አይፈልጉም።ሙር እንደሚናገሩት፤ የትራምፕ አስተዳደር በግድቡ
ዙርያ ለግብፅ የወገነውም በዚህ ምክንያት ነው።ትራምፕ ሱዳንን በተመለከተ የደረሱበት ውሳኔ የአረቡን አገራት ከእስራኤል ጋር
ለማስስማት የሚያደርጉት ጥረት አንድ አካል ነው።ሱዳን ከእስራኤል ጋር ስምምነት ለማድረግ ወስናለች።በእርግጥ የአገሪቱ ተጠባባቂ
የውጪ ጉዳይ ሚንስትር ውሳኔው ገና በሕግ አውጪ መጽደቅ እንዳለበት ቢናገሩም፤ ሱዳን እንደ ጎርጎሮሳውያኑ 1967 ላይ የአረብ
ሊግ አገራት ውይይት ማስተናገዷ መዘንጋት የለበትም። በውይይቱ "ከእስራኤል ጋር መቼም ሰላም አይፈጠርም። መቼም ቢሆን ለእስራኤል
እውቅና አይሰጥም። ድርድርም አይካሄድም" ተብሎም ነበር።ሱዳን ከእስራኤል ጋር ለመስማማት በመፍቀዷ ትራምፕ ሽብርን ከሚድፉ አገሮች
ዝርዝር እንደሚያስወጧት ተናግረዋል። ይህም ለምጣኔ ሀብቷ ማገገም የሚረዳ ድጋፍ እንድታገኝ ያግዛታል።ትራምፕ በድጋሚ ከተመረጡ
ኢትዮጵያ ግድቡን በተመለከተ ሱዳን እና ግብፅ ላላቸው ስጋት አንዳች መልስ እንድትሰጥ ጫና እንደሚያደርጉ ይጠበቃል።አጥኚው እንደሚሉት፤
ሱዳን ሽብርን ከሚደግፉ አገሮች ዝርዝር ከወጣች የትራምፕ አስተዳደር በምላሹ የሚጠብቀው ነገር አለ።"ከእስራኤል ጋር ስምምነት
የመፍጠር ጉዳይ የሱዳን ማኅበረሰብን የከፋፈለ ነው። መንግሥት የራሱ የጸጥታ ጥያቄዎች እያሉበት ይህን ውሳኔ ማሳለፉ ችግር ሊያስከትል
ይችላል" ብለዋል። ትራምፕ አፍሪካን በተመለከተ የሚያራምዱት ፖሊሲ፤ በአሜሪካ እና በቻይና መካከል የሚካሄድ ''አዲሱ ቀዝቃዛ
ጦርነት'' ነው ሲል ረሺድ ይገልጸዋል።ለምሳሌ ቻይና ከግዛቷ ውጪ የመጀመሪያውን ወታደራዊ መቀመጫ የከፈተችው በጅቡቲ ነው። ማዕከሉ
የሚገኘው አሜሪካ የሶማሊያ ታጣቂዎች ላይ የአየር ጥቃት ለመሰንዘር ያቋቋመችው ማዕከል አቅራቢያ ነው።በቅርቡ የአሜሪካ ተዋጊ
ጀቶች ለማረፍ ሲሞክሩ፤ ቻይና የአሜሪካውያን ወታደሮችን እይታ የሚጋርድ መሣሪያ መሞከሯን ረሺድ ያጣቅሳል። "የትራምፕ አስተዳደር
ጸረ ቻይና ፖሊስ ያራምዳል" የሚለው ተንታኙ ሁኔታው ለአፍሪካ ቀንድ አስቸጋሪ መሆኑንም ያስረዳል።ቻይና አፍሪካ ውስጥ ያላትን
የንግድ የበላይነት ለመቀልበስ፤ የትራምፕ አስተዳደር ''ፕሮስፔሪቲ አፍሪካ ኢን 2018'' የተባለ ፖሊሲ ነድፏል።በአፍሪካ እና
በአሜሪካ መካከል የሚካሄደውን ንግድ በእጥፍ የማሳደግ እቅድ አለ። አምና የአሜሪካ መንግሥት የንግድ ተቋሞች አፍሪካ ውስጥ እንዲሠሩ
የገንዘብ ድጋፍ የሚሰጥበት አሠራር ዘርግቷል።ሙር እንደሚሉት፤ የአሜሪካ ድርጅቶች ከቻይና ተቋሞች ጋር መወዳደር አልቻልንም ብለው
ቅሬታ ስላሰሙ የገንዘብ ድጋፍ ለመስጠት ተወስኗል። "የአይቲ ዘርፍ እንደ ማሳያ ቢወሰድ፤ 70 በመቶ የአፍሪካ ኢንፎርሜሽን ቴክኖሎጂ
የተመሠረተው በቻይና ድርጅቶች ላይ ነው" ሲሉ ያብራራሉ። የትራምፕ አስተዳደር በ2025 የሚያበቃውን ከ30 በላይ የአፍሪካ አገሮች
ተጠቃሚ እንዲሆኑበት ታስቦ በአሜሪካ ለአፍሪካውያን የተሰጠው ከታሪፍና ከቀረጥ ነፃ የገበያ ዕድል (አፍሪካ ግሮዝ ኤንድ ኦፖርቹኒቲ
አክት-አጎዋ) የመሰረዝ እቅድ አለው። ለአፍሪካ ምርቶች የአሜሪካን ገበያ ክፍት የሚያደርገው ስምምነት የተፈረመው በቢል ክሊንተን
ነበር።አሜሪካ አሁን ላይ ትኩረቷ የሁለትዮሽ የንግድ ስምምነት እንደሆነ ሙር ይናገራሉ። ለምሳሌ ከኬንያ ጋር ንግግር እየተካሄደ
ነው።ኬንያ፤ የቻይና ''ቤልት ኤንድ ሮድ ኢኒሽየቲቭ'' አካል እንደሆነች ይታወቃል። ስምምነቱ ቻይናን ከአፍሪካ ጋር በንግድ
የሚያስተሳስርና የቻይና ዓለም አቀፍ ተደማጭነት የሚያጎላ እንደሆነ አሜሪካ ታምናለች።ትራምፕ ከኬንያ ጋር በቀጥታ ከተስማሙ በኋላ
ተመሳሳይ መንገድ ተጠቅመው ከሌሎች የአፍሪካ አገሮች ጋር የመሥራት ውጥን እንዳላቸው ሙር ይናገራሉ።ይህ የትራምፕ መንገድ፤ ከአፍሪካ
ሕብረት የንድግና ኢንዱስትሪ ኮሚሽነር አልበርት ሙቻንጋን ሐሳብ ጋር ይጣረሳል።እሳቸው የአፍሪካ አገራት በተናጠል ሳይሆን በአንድነት
ከአሜሪካ ጋር ስምምነት እንዲያደርጉ ይፈልጋሉ። ሙር እንደሚሉት፤ የአሜሪካ ውሳኔ የአፍሪካ ሕብረት የአህጉሪቱን ምጣኔ ሀብት
ለማጣመር ከሚያደርገው ጥረት ጋር ይጣረሳል።ሕብረቱ፤ አፍሪካን የዓለም ትልቋ ነጻ የንግድ ቀጠና የማድረግ አላማ አለው።ትራምፕ
ግን በጥምረት ከሚሠሩ ኃይሎች ጋር በጋራ ያለመደራደር አዝማሚያ ያሳያሉ ሲሉ አጥኚው ያክላሉ።የትራምፕ ተቀናቃኝ ጆ ባይደን ካሸነፉ
የአፍሪካ ፖሊሲያቸው ምን እንደሚሆን እስካሁን አልገለጹም።"የባይደን አስተዳደር በኦባማ ጊዜ ወደነበረው ሂደት ሊመለስ ይችላል"
ይላሉ ሙር። '
- አዲስ አበባ፣ ጥር 2፣ 2013(ኤፍ ቢ ሲ) የጋምቤላ ክልል ወጣት የሴራ ፖለቲካ አራማጆችን በዝምታ አይመለከቱም ሲል የክልሉ
ብልጽግና ፓርቲ ወጣቶች ሊግ ሰብሳቢ ወጣት ራች ጎች ገለጸ።የክልሉ የብልጽግና ፓርቲ ወጣቶች ሊግ የውይይት መድረክ ትናንት ተካሂዷል።ከአሁን
በፊት በነበረው የፖለቲካ ሴራ ወጣቱም ሆነ መላው የክልሉ ህዝብ ተጠቃሚ ሳይሆን ቆይቷል ያለው ሰብሳቢው ይህንን የህዝብ ጥቅም
የማያረጋግጥ የፖለቲካ ሴራ አካሄድ የክልሉ ወጣት እንደማይቀበለው ገልጿል።የክልሉ ህዝብ እኩል ተጠቃሚ የመሆን ዕድል ማግኘቱን
አስታውሶ፤ “በቀጣይ የሴራ ፖለቲካ አራማጆችን ወጣቱ በዝምታ አይመለከትም” ብሏል።የሊጉ ምክትል ሰብሳቢ ወጣት ኡጁሉ ቢሩ በበኩሉ
“ከአሁን በጎጥና በመንደር በመከፋፈል አንድነቱን ለመሸርሽር ሲሰራ ነበር” ብሏል።ህዝቡ ልዩነቶች እንዳማያስፈልጉ በመረዳቱ በክልሉ
ሰላም መረጋገጡን ጠቅሶ፤ “በቀጣይ በሚስማሙና በሚያግባቡ ጎዳዮች ዙሪያ እንሰራለን” ሲል ተናግሯል።የመድረኩ ተሳታፊ ወጣቶችም
ሀገርን ማልማትና ማሳደግ በሚያስችሉ ጉዳዮች ላይ ትኩረት ማድረግ እንደሚገባ በመግለጽ ሐሳብ አንስተዋል።ለዘንድሮ ምርጫ ሰላማዊ
ሂደትና ለተጀመረው የብልጽግና ጉዞ ስኬታማነት የበኩላቸውን አስተዋጽኦ ለማበርከት ዝግጁ መሆናቸውንም አረጋግጠዋል።ከጽንፈኝነትና
ከብሄርተኝነት አስተሳሰቦች በመውጣት መንግስት በጀመራቸው የሰላም፣ የዴምክራሲና የልማት ስራዎች በንቃት ለመሳተፍ ዝግጁ እንደሆኑ
መግለፃቸውን ኢዜአ ዘግቧል።የክልሉ ብልጽግና ፓርቲ ጽህፈት ቤት ኃላፊ አቶ ላክደር ላክባክ ፤ በሀገሪቱ እየተካሄደ ያለውን ሁለንተናዊ
ለውጥና የብልፅግና ጉዞ እውን ለማድረግ ወጣቱ ኃይል የማይተካ ሚና አለው ብለዋል።ከፌስቡክ ገፃችን በተጨማሪ ወቅታዊ፣ ትኩስ
እና የተሟሉ መረጃዎችን ለማግኘት፡-የፋና ድረ ገጽ ይጎብኙ፤ተንቀሳቃሽ ምስሎችን ለማግኘት የፋና ቴሌቪዥን የዩቲዩብ ቻናል ሰብስክራይብ
ያድርጉፈጣን መረጃዎችን ለማግኘት ትክክለኛውን የፋና ቴሌግራም ቻናል ይቀላቀሉከዚህ በተጨማሪም በትዊተር ገጻችን ይወዳጁንዘወትር
ከእኛ ጋር ስላሉ እናመሰግናለን!
- አዲስ አበባ ፣ ህዳር 1 ፣ 2013 (ኤፍ ቢ ሲ) ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ መሆን የለባቸውም
ሲሉ በቁጥጥር ስር የዋሉ የጽንፈኛው ህወሓት ቡድን ታጣቂዎች ገለጹ።ከአንድ ሳምንት በፊት በትግራይ ክልል በነበረው የመከላከያ
ሰራዊት ሰሜን ዕዝ ላይ በህወሓት ቡድን የተፈጸመውን ጥቃት ተከትሎ የሃገር መከላከያ ሠራዊት በጠቅላይ ሚኒስትር ዐቢይ አሕመድ
በተሰጠው ሃገርን የማዳን ተልዕኮ ሕግ ለማስከበር የዘመቻ ሥራዎችን እያከናወነ ይገኛል።የሠራዊቱ 5ኛ ሜካናይዝድ ክፍለ ጦር የህወሓትን
ታጣቂዎች በቁጥጥር ስር አውሏል።በቁጥጥር ስር የዋሉት ታጣቂዎች የትግራይ ልዩ ኃይልን የተቀላቀሉት ኑሯቸውን አሸንፈው ለማደግ
እንጂ ከሃገር መከላከያ ሠራዊት ጋር ለመዋጋት አለመሆኑን ገልጸዋል።ኑሮን ለማሸነፍ በሚል ወደ ልዩ ኃይሉ ቢገቡም የህወሓት የጥፋት
ቡድን እኩይ ዓላማ ማስፈጸሚያ ከመሆን ውጪ ያገኙት ነገር አለመኖሩን ነው የተናገሩት።ከሃገር መከላከያ ሠራዊት ጋር መጋጨት ማለት
ከኢትዮጵያ ጋር መጋጨት መሆኑንም ገልጸዋል።የትግራይ ልዩ ኃይል እና ወጣትም የህወሓት የጥፋት ቡድን ሰላባ እንዳይሆኑ ከሃገር
መከላከያ ሠራዊቱ ጎን መቆም እንዳለባቸው ተናግረዋል።ታጣቂዎቹ በቁጥጥር ስር ከዋሉ በኋላ በሃገር መከላከያ ሠራዊቱ የደረሰባቸው
ምንም አይነት ችግር እንደሌለና በአሁኑ ወቅት በጥሩ ሁኔታ ላይ እንደሚገኙም አስረድተዋል።የሃገር መከላከያ ሠራዊት እያከናወነ
ባለው ዘመቻ የትግራይ ልዩ ኃይልና ሚሊሻ አባላት በቁጥጥር ስር እየዋሉ መሆኑን ኢዜአ ዘግቧል።
datasets:
- Desalegnn/amharic-passage-retrieval-dataset
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: RoBERTa Amharic Text Embedding Medium
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6580183404160144
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7957951241333036
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8378438828002684
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.881458286736748
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6580183404160144
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26526504137776785
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16756877656005367
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08814582867367479
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6580183404160144
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7957951241333036
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8378438828002684
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.881458286736748
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7709474570309212
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7353928136527119
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7393628003261186
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.6486244687989264
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7924401699843435
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8322522925520018
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8785506598076493
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6486244687989264
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2641467233281145
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16645045851040036
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08785506598076492
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6486244687989264
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7924401699843435
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8322522925520018
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8785506598076493
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7648909692248538
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7283518477099342
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7323704818675834
name: Cosine Map@100
---
# RoBERTa Amharic Text Embedding Medium
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [rasyosef/roberta-medium-amharic](https://huggingface.co/rasyosef/roberta-medium-amharic) on the [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset) dataset. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [rasyosef/roberta-medium-amharic](https://huggingface.co/rasyosef/roberta-medium-amharic) <!-- at revision 9d02d0281e64d6ca31bd06d322e14b0b7e60375b -->
- **Maximum Sequence Length:** 510 tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset)
- **Language:** am
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 510, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Desalegnn/Desu-roberta-amharic-embed-medium-45k")
# Run inference
sentences = [
'በቁጥጥር ስር የዋሉ የህወሓት ታጣቂዎች ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ ከመሆን እንዲቆጠቡ አስገነዘቡ',
'አዲስ አበባ ፣ ህዳር 1 ፣ 2013 (ኤፍ ቢ ሲ) ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ መሆን የለባቸውም ሲሉ በቁጥጥር ስር የዋሉ የጽንፈኛው ህወሓት ቡድን ታጣቂዎች ገለጹ።ከአንድ ሳምንት በፊት በትግራይ ክልል በነበረው የመከላከያ ሰራዊት ሰሜን ዕዝ ላይ በህወሓት ቡድን የተፈጸመውን ጥቃት ተከትሎ የሃገር መከላከያ ሠራዊት በጠቅላይ ሚኒስትር ዐቢይ አሕመድ በተሰጠው ሃገርን የማዳን ተልዕኮ ሕግ ለማስከበር የዘመቻ ሥራዎችን እያከናወነ ይገኛል።የሠራዊቱ 5ኛ ሜካናይዝድ ክፍለ ጦር የህወሓትን ታጣቂዎች በቁጥጥር ስር አውሏል።በቁጥጥር ስር የዋሉት ታጣቂዎች የትግራይ ልዩ ኃይልን የተቀላቀሉት ኑሯቸውን አሸንፈው ለማደግ እንጂ ከሃገር መከላከያ ሠራዊት ጋር ለመዋጋት አለመሆኑን ገልጸዋል።ኑሮን ለማሸነፍ በሚል ወደ ልዩ ኃይሉ ቢገቡም የህወሓት የጥፋት ቡድን እኩይ ዓላማ ማስፈጸሚያ ከመሆን ውጪ ያገኙት ነገር አለመኖሩን ነው የተናገሩት።ከሃገር መከላከያ ሠራዊት ጋር መጋጨት ማለት ከኢትዮጵያ ጋር መጋጨት መሆኑንም ገልጸዋል።የትግራይ ልዩ ኃይል እና ወጣትም የህወሓት የጥፋት ቡድን ሰላባ እንዳይሆኑ ከሃገር መከላከያ ሠራዊቱ ጎን መቆም እንዳለባቸው ተናግረዋል።ታጣቂዎቹ በቁጥጥር ስር ከዋሉ በኋላ በሃገር መከላከያ ሠራዊቱ የደረሰባቸው ምንም አይነት ችግር እንደሌለና በአሁኑ ወቅት በጥሩ ሁኔታ ላይ እንደሚገኙም አስረድተዋል።የሃገር መከላከያ ሠራዊት እያከናወነ ባለው ዘመቻ የትግራይ ልዩ ኃይልና ሚሊሻ አባላት በቁጥጥር ስር እየዋሉ መሆኑን ኢዜአ ዘግቧል።',
    'የፕሬዚዳንት ዶናልድ ትራምፕ ተቺዎች እንደሚሉት፤ ፕሬዚዳንቱ ለዘመናት የአሜሪካ ወዳጆች በሆኑት ኢትዮጵያ እና ግብፅ መካከል ታላቁ የሕዳሴ ግድብን በተመለከተ ውጥረት ቀስቅሰዋል።ይህም በአሜሪካ እና በአፍሪካ የዲፕሎማሲ ታሪክ ትልቁ የትራምፕ ስህተት ነው ይላሉ።ትራምፕ ከቀናት በፊት ግብፅ "ግድቡን ልታፈነዳው ትችላለች" ማለታቸው ይታወሳል። ጥር ላይ ፕሬዚዳንቱ "ስምምነት መፍጠር ችያለሁ፤ ከባድ ጦርነትም አስቁሜያለሁ" ብለው የኖቤል የሰላም ሽልማት እንደሚገባቸው መናገራቸው ይታወሳል።ነገር ግን ተሸላሚ የሆኑት ጠቅላይ ሚንስትር ዐብይ አሕመድ ነበሩ ።ትራምፕ የኖቤል የሰላም ሽልማት እንደሚገባቸው ሲናገሩ ጉዳዩን ግልፅ ባያደርጉትም፤ በግብፁ ፕሬዘዳንት አብዱልፈታህ አል-ሲሲ ጥሪ መሠረት በኢትዮጵያ እና በግብፅ መካከል ጣልቃ ስለመግባታቸው እየተናገሩ እንደነበረ ይታመናል።ትራምፕ በአንድ ወቅት አብዱልፈታህ አል-ሲሲን "የኔ ምርጡ አምባገነን" ማለታቸው አይዘነጋም።ግብፅ ታላቁ ሕዳሴ ግድብ "ለደህንነቴ ያሰጋኛል" ትላለች። ሱዳንም የግብፅን ያህል ባይሆንም ስጋቱን ትጋራለች። በሌላ በኩል ኢትዮጵያ የኃይል አመንጪውን ግድብ አስፈላጊነት አስረግጣ ትገልጻለች።ኬንያ የሚገኘው የአፍሪካ ቀንድ የጸጥታ ጉዳይ ተንታኝ ረሺድ አብዲ እንደሚለው፤ በግድቡ ዙሪያ ኢትዮጵያ እና ግብፅን ለማደራደር አሜሪካ ጣልቃ መግባቷ የሁለቱን አገሮች ውጥረት አባብሷል።"ኢትዮጵያ በግድቡ አቅራቢያ የጸጥታ ኃይሏን እያጠናከረች ነው። ቤንሻንጉል ጉሙዝ ክልልን ከበረራ ውጪ ማድረጓ አንዱ ማሳያ ነው። በግድቡ ዙሪያ በረራ የሚያግድ መሣሪያም ተገጥሟል። ግብፅ የወታደራዊ ቅኝት በረራ ልታደርግ እንደምትችል ከመስጋት የመነጨ ሊሆን ይችላል" ይላል።ተንታኙ እንደሚናገረው፤ ትራምፕ ዓለም አቀፍ ዲፕሎማሲ እንዴት እንደሚሠራ የሚገነዘቡ አይመስልም።"በንግዱ ዓለም እንደሚደረገው ስምምነት ላይ መድረስ ይቻላል የሚል የተዛባ አመለካከት አላቸው። የውጪ ጉዳይ መያዝ ያለበትን ጉዳይ ግምዣ ቤት ድርድሩን እንዲመራ ያደረጉትም ለዚህ ነው። ከመነሻውም መጥፎ የነበረውን ሁኔታም አባብሶታል" ሲልም ረሺድ ያስረዳል።ኢትዮጵያ ከግብፅ እና ከሱዳን ጋር ያለው ድርድር ሳይቋጭ ግድቡን ለመሙላት በመወሰኗ አሜሪካ የ100 ሚሊዮን ዶላር እርዳታ ማጠፏ ተዘግቧል።ረሺድ "ኢትዮጵያ አሜሪካ እንደከዳቻት ይሰማታል። ብዙ ኢትዮጵያውያን ትራምፕን የጥላቻ ምልክት አድርገውታል" በማለት ሁኔታውን ይገልጻል።የዴሞክራት እጩው ጆ ባይደን እንዲያሸንፉም የበርካታ ኢትዮጵያውያን ምኞት ነው።አሜሪካ የሚገኘው ሴንተር ፎር ግሎባል ዴቨሎፕመንት ውስጥ የፖሊሲ አጥኚ ደብሊው ጉዬ ሙር እንደሚሉት፤ የትራምፕ አስተዳደር እስራኤልና የአረብ ሊግ አገራት መካከል ሰላም መፍጠር ስለሚፈልግ ከግብፅ ጎን መቆሙ የሚጠበቅ ነው።ግብፅ ከእስራኤል ጋር ዘመናት ያስቆጠረ ዲፕሎማሲያዊ ትስስር አላት። ትራምፕ የአረብ ሊግ አገራት ለእስራኤል እውቅና እንዲሰጡ ጥረት እያደረጉ ስለሆነ አብዱልፈታህ አል-ሲሲን ማስቀየም አይፈልጉም።ሙር እንደሚናገሩት፤ የትራምፕ አስተዳደር በግድቡ ዙርያ ለግብፅ የወገነውም በዚህ ምክንያት ነው።ትራምፕ ሱዳንን በተመለከተ የደረሱበት ውሳኔ የአረቡን አገራት ከእስራኤል ጋር ለማስስማት የሚያደርጉት ጥረት አንድ አካል ነው።ሱዳን ከእስራኤል ጋር ስምምነት ለማድረግ ወስናለች።በእርግጥ የአገሪቱ ተጠባባቂ የውጪ ጉዳይ ሚንስትር ውሳኔው ገና በሕግ አውጪ መጽደቅ እንዳለበት ቢናገሩም፤ ሱዳን እንደ ጎርጎሮሳውያኑ 1967 ላይ የአረብ ሊግ አገራት ውይይት ማስተናገዷ መዘንጋት የለበትም። በውይይቱ "ከእስራኤል ጋር መቼም ሰላም አይፈጠርም። መቼም ቢሆን ለእስራኤል እውቅና አይሰጥም። ድርድርም አይካሄድም" ተብሎም ነበር።ሱዳን ከእስራኤል ጋር ለመስማማት በመፍቀዷ ትራምፕ ሽብርን ከሚድፉ አገሮች ዝርዝር እንደሚያስወጧት ተናግረዋል። ይህም ለምጣኔ ሀብቷ ማገገም የሚረዳ ድጋፍ እንድታገኝ ያግዛታል።ትራምፕ በድጋሚ ከተመረጡ ኢትዮጵያ ግድቡን በተመለከተ ሱዳን እና ግብፅ ላላቸው ስጋት አንዳች መልስ እንድትሰጥ ጫና እንደሚያደርጉ ይጠበቃል።አጥኚው እንደሚሉት፤ ሱዳን ሽብርን ከሚደግፉ አገሮች ዝርዝር ከወጣች የትራምፕ አስተዳደር በምላሹ የሚጠብቀው ነገር አለ።"ከእስራኤል ጋር ስምምነት የመፍጠር ጉዳይ የሱዳን ማኅበረሰብን የከፋፈለ ነው። መንግሥት የራሱ የጸጥታ ጥያቄዎች እያሉበት ይህን ውሳኔ ማሳለፉ ችግር ሊያስከትል ይችላል" ብለዋል። ትራምፕ አፍሪካን በተመለከተ የሚያራምዱት ፖሊሲ፤ በአሜሪካ እና በቻይና መካከል የሚካሄድ \'አዲሱ ቀዝቃዛ ጦርነት\' ነው ሲል ረሺድ ይገልጸዋል።ለምሳሌ ቻይና ከግዛቷ ውጪ የመጀመሪያውን ወታደራዊ መቀመጫ የከፈተችው በጅቡቲ ነው። ማዕከሉ የሚገኘው አሜሪካ የሶማሊያ ታጣቂዎች ላይ የአየር ጥቃት ለመሰንዘር ያቋቋመችው ማዕከል አቅራቢያ ነው።በቅርቡ የአሜሪካ ተዋጊ ጀቶች ለማረፍ ሲሞክሩ፤ ቻይና የአሜሪካውያን ወታደሮችን እይታ የሚጋርድ መሣሪያ መሞከሯን ረሺድ ያጣቅሳል። "የትራምፕ አስተዳደር ጸረ ቻይና ፖሊስ ያራምዳል" የሚለው ተንታኙ ሁኔታው ለአፍሪካ ቀንድ አስቸጋሪ መሆኑንም ያስረዳል።ቻይና አፍሪካ ውስጥ ያላትን የንግድ የበላይነት ለመቀልበስ፤ የትራምፕ አስተዳደር \'ፕሮስፔሪቲ አፍሪካ ኢን 2018\' የተባለ ፖሊሲ ነድፏል።በአፍሪካ እና በአሜሪካ መካከል የሚካሄደውን ንግድ በእጥፍ የማሳደግ እቅድ አለ። አምና የአሜሪካ መንግሥት የንግድ ተቋሞች አፍሪካ ውስጥ እንዲሠሩ የገንዘብ ድጋፍ የሚሰጥበት አሠራር ዘርግቷል።ሙር እንደሚሉት፤ የአሜሪካ ድርጅቶች ከቻይና ተቋሞች ጋር መወዳደር አልቻልንም ብለው ቅሬታ ስላሰሙ የገንዘብ ድጋፍ ለመስጠት ተወስኗል። "የአይቲ ዘርፍ እንደ ማሳያ ቢወሰድ፤ 70 በመቶ የአፍሪካ ኢንፎርሜሽን ቴክኖሎጂ የተመሠረተው በቻይና ድርጅቶች ላይ ነው" ሲሉ ያብራራሉ። የትራምፕ አስተዳደር በ2025 የሚያበቃውን ከ30 በላይ የአፍሪካ አገሮች ተጠቃሚ እንዲሆኑበት ታስቦ በአሜሪካ ለአፍሪካውያን የተሰጠው ከታሪፍና ከቀረጥ ነፃ የገበያ ዕድል (አፍሪካ ግሮዝ ኤንድ ኦፖርቹኒቲ አክት-አጎዋ) የመሰረዝ እቅድ አለው። ለአፍሪካ ምርቶች የአሜሪካን ገበያ ክፍት የሚያደርገው ስምምነት የተፈረመው በቢል ክሊንተን ነበር።አሜሪካ አሁን ላይ ትኩረቷ የሁለትዮሽ የንግድ ስምምነት እንደሆነ ሙር ይናገራሉ። ለምሳሌ ከኬንያ ጋር ንግግር እየተካሄደ ነው።ኬንያ፤ የቻይና \'ቤልት ኤንድ ሮድ ኢኒሽየቲቭ\' አካል እንደሆነች ይታወቃል። ስምምነቱ ቻይናን ከአፍሪካ ጋር በንግድ የሚያስተሳስርና የቻይና ዓለም አቀፍ ተደማጭነት የሚያጎላ እንደሆነ አሜሪካ ታምናለች።ትራምፕ ከኬንያ ጋር በቀጥታ ከተስማሙ በኋላ ተመሳሳይ መንገድ ተጠቅመው ከሌሎች የአፍሪካ አገሮች ጋር የመሥራት ውጥን እንዳላቸው ሙር ይናገራሉ።ይህ የትራምፕ መንገድ፤ ከአፍሪካ ሕብረት የንድግና ኢንዱስትሪ ኮሚሽነር አልበርት ሙቻንጋን ሐሳብ ጋር ይጣረሳል።እሳቸው የአፍሪካ አገራት በተናጠል ሳይሆን በአንድነት ከአሜሪካ ጋር ስምምነት እንዲያደርጉ ይፈልጋሉ። ሙር እንደሚሉት፤ የአሜሪካ ውሳኔ የአፍሪካ ሕብረት የአህጉሪቱን ምጣኔ ሀብት ለማጣመር ከሚያደርገው ጥረት ጋር ይጣረሳል።ሕብረቱ፤ አፍሪካን የዓለም ትልቋ ነጻ የንግድ ቀጠና የማድረግ አላማ አለው።ትራምፕ ግን በጥምረት ከሚሠሩ ኃይሎች ጋር በጋራ ያለመደራደር አዝማሚያ ያሳያሉ ሲሉ አጥኚው ያክላሉ።የትራምፕ ተቀናቃኝ ጆ ባይደን ካሸነፉ የአፍሪካ ፖሊሲያቸው ምን እንደሚሆን እስካሁን አልገለጹም።"የባይደን አስተዳደር በኦባማ ጊዜ ወደነበረው ሂደት ሊመለስ ይችላል" ይላሉ ሙር። ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 512]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7371, 0.0595],
# [0.7371, 1.0000, 0.1438],
# [0.0595, 0.1438, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.658 |
| cosine_accuracy@3 | 0.7958 |
| cosine_accuracy@5 | 0.8378 |
| cosine_accuracy@10 | 0.8815 |
| cosine_precision@1 | 0.658 |
| cosine_precision@3 | 0.2653 |
| cosine_precision@5 | 0.1676 |
| cosine_precision@10 | 0.0881 |
| cosine_recall@1 | 0.658 |
| cosine_recall@3 | 0.7958 |
| cosine_recall@5 | 0.8378 |
| cosine_recall@10 | 0.8815 |
| **cosine_ndcg@10** | **0.7709** |
| cosine_mrr@10 | 0.7354 |
| cosine_map@100 | 0.7394 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6486 |
| cosine_accuracy@3 | 0.7924 |
| cosine_accuracy@5 | 0.8323 |
| cosine_accuracy@10 | 0.8786 |
| cosine_precision@1 | 0.6486 |
| cosine_precision@3 | 0.2641 |
| cosine_precision@5 | 0.1665 |
| cosine_precision@10 | 0.0879 |
| cosine_recall@1 | 0.6486 |
| cosine_recall@3 | 0.7924 |
| cosine_recall@5 | 0.8323 |
| cosine_recall@10 | 0.8786 |
| **cosine_ndcg@10** | **0.7649** |
| cosine_mrr@10 | 0.7284 |
| cosine_map@100 | 0.7324 |
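The numbers above can be reproduced with the same evaluator. A minimal sketch, assuming `queries`, `corpus`, and `relevant_docs` dictionaries built from the evaluation split (the `q1`/`d1` ids below are hypothetical placeholders):

```python
# Minimal sketch; q1/d1 are hypothetical placeholders for real ids from the eval split.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Desalegnn/Desu-roberta-amharic-embed-medium-45k")
evaluator = InformationRetrievalEvaluator(
    queries={"q1": "የአማርኛ ጥያቄ"},     # query_id -> query text
    corpus={"d1": "የአማርኛ ምንባብ"},     # doc_id -> passage text
    relevant_docs={"q1": {"d1"}},       # query_id -> set of relevant doc_ids
    truncate_dim=256,
    name="dim_256",
)
print(evaluator(model))
```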
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### amharic-passage-retrieval-dataset
* Dataset: [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset) at [e7be243](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset/tree/e7be2430fc785999074dee8dbac1c3e466449442)
* Size: 40,237 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 14.69 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 293.39 tokens</li><li>max: 510 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>ሚንስትር ዴኤታ ወይዘሮ አለም-ፀሀይ የአርባ ምንጭ ሆስፒታልና የኮቪድ-19 ሕክምና ማዕከልን ጎበኙ</code> | <code>አዲስ አበባ፣ መስከረም 13፣ 2013 (ኤፍ.ቢ.ሲ) የጤና ሚኒስቴር ሚንስትር ዴኤታ ወይዘሮ አለምፀሀይ ጳውሎስ በደቡብ ክልል ጋሞ ዞን የአርባ ምንጭ ከተማ ሆስፒታል እና ጤና ጣቢያ ጎብኙ፡፡እንዲሁም በኮቪድ-19 የህክምና ማዕከል ተገኝተው ያለውን የስራ እንቅስቃሴ መመልከታቸውም ተገልጸል፡፡ሚኒስትር ዴኤታዋ በጉብኝቱ ወቅት የህክምና ተቋማቱ ለአካባቢ ነዋሪዎች እየሰጡ ያለውን ዘርፈ ብዙ አገልግሎት እና ለኮቪድ 19 ወረርሽኝ የመከላከልና የመቆጣጠር ምላሽ አሠጣጥ የሚበረታታና ውጤታማ እንደሆነ ተናግረዋል፡፡በዚህም ለማዕከሉ ሰራተኞች ምስጋናቸውን አቅርበዋል፡፡የተቋማቱ ስራ ኃላፊዎችም ከሚኒስትር ዴኤታዋ ጋር መወያየታቸው ተሰምቷል፡፡ኃላፊዎቹ አገልግሎታቸውን በተሟላ መንገድ ለመስራት አያስችሉንም ያሏቸውን ጉድለቶች አንስተው ውይይት አድረገውባቸዋል፡፡የህክምና ተቋማቱ ያሉበት የስራ አፈጻጸም የሚበረታታ ቢሆንም ለተሻለ ስራ መነሳትና የጤና አገልግሎቱን ይበልጥ ማሻሻል ያስፈልጋል ሲሉ ሚኒስትር ዴኤታዋ ማሳሰባቸውን ከሚኒስቴሩ ያገኘነው መረጃ ያመለክታል፡፡</code> |
| <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠየቁ</code> | <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡የሰላም ሚኒስቴር ከሳይንስና ከፍተኛ ትምህርት ሚኒስቴርና የኢትዮጵያ መምህራን ማህበር ጋር በመተባበር ያዘጋጁት ሀገር አቀፍ መምህራን የሰላም ውይይት መድረክ በአዲስ አበባ እየተካሄደ ነው፡፡በዚህ የውይይት መድረክ ላይ የሰላም ሚኒስትሯ ወይዘሮ ሙፈሪያት ካሚልን ጨምሮ ሌሎች ባለድርሻ አካላት ተገኝተዋል፡፡ውይይቱ “ሰላምና ሀገር ወዳድ መምህራኖች ፤ ሰላምና ሀገር ወዳድ ተማሪዎችን ያፈራሉ” በሚል መሪ ቃል እየተካሄደ የሚገኝ ሲሆን መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡በውይይቱ ንግግር ያደረጉት የሰላም ሚኒስትር ወይዘሮ ሙፈሪያት ካሚል መምህራን ትውልድን መቅረጽ ካላቸው እድል አንፃር ሰላምን በመስበክ በኩል ከፍተኛ አስተዋጽኦ ሊያበርክቱ ይገባል ብለዋል፡፡ሀገራዊ ግንባታ ትምህርትና የተሟላ ስብዕና የሚጠይቅ በመሆኑም ለማህበረሰብ ስብዕናና የበለጸገ ትውልድን በመፍጠር ረገድ የመምህራን ሚና ክፍተኛ መሆኑንም ተናግረዋል።ትምህርት ቤቶች የሰላም ማዕድ ይሆኑ ዘንድም መምህራን እያከናዎኑት ያለውን ትውልድን የመቅረጽ ተግባር አጠናክረው መቀጠል እንዳለባቸውም ወይዘሮ ሙፈሪያት አሳስበዋል፡፡ በውይይቱ ላይ አስተያየት የሰጡት መምህራን በበኩላቸው ሰላም ሁሉንም የሚመለከት ጉዳይ በመሆኑ ሰላምን በመስበክና በማረጋገጥ ረገድ ከመንግስት ጋር በመሆን የሚጠበቅባቸውን ኃላፊነት እንደሚወጡ ገልጸዋል፡፡በተለይም የስነ ዜጋ፣ ስነ ምግባርና የታሪክ ትምህርት መምህራን ለተማሪዎች በሚያቀርቡት ትምህርት ላይ ሚዛናዊና ኃላፊነት በተሞላበት መንገድ ማቅረብ እንዳለባቸውም ጠቁመዋል፡፡ መምህሩ በስነ ምግባር አርዓያ በመሆን ሰላምና ግብ...</code> |
| <code>የኢትዮጵያ እና ማሊ ከ17 አመት በታች ብሄራዊ ቡድኖች ጨዋታ እሁድ ይካሄዳል</code> | <code>በአዲስ አበባ ስታድየም እየተዘጋጀ የሚገኘው ብሄራዊ ቡድኑ በዛሬው የልምምድ መርሃ ግብር በእሁዱ ጨዋታ ላይ ቋሚ ተሰላፊዎች ይሆናሉ ተብለው የሚገመቱትን በመለየት የቅንጅትና ከርቀት አክርሮ የመምታት ልምምዶችን አከናውኗል፡፡ባለፉት ሶስት ቀናት በመጠነኛ ጉዳት በልምምድ ወቅት አቋርጠው ሲወጡ የነበሩት ሳሙኤል ተስፋዬ እና አቡበከር ነስሩ በዛሬው ልምምድ ከቡድኑ ጋር ሙሉ ልምምድ የሰሩ ሲሆን ሁሉም ተጨዋቾች በሙሉ ጤንነት ላይ ይገኛሉ፡፡ከ17 አመት ቡድናችን እሁድ ዕለት ከአፍሮ ፅዮን ጋር ባደረጉት የአቋም መፈተሻ ጨዋታ ላይ ከአፍሮፅዮን በኩል መልካም እንቅስቃሴ ያሳዩ 6 ተጨዋቾች ጥሪ ቀርቦላቸው በዛሬው ልምምድ ላይ ተገኝተው ከቡድኑ ጋር ልምምድ ያደረጉ ቢሆንም አሳማኝ እንቅስቃሴ ባለማሳየታቸው እንዲመለሱ ተደርጓል፡፡ቀይ ቀበሮዎቹ በእሁዱ ጨዋታ በባማኮ የደረሰባቸውን የ2-0 ሽንፈት ቀልብሰው ወደ ማዳጋስካር የአፍሪካ ከ17 አመት በታች ዋንጫ ለማምራት በከፍተኛ ተነሳሽነት እና ፍላጎት ዝግጅታቸውን በማከናወን ላይ እንደሚገኙ ለመታዘብ ችለናል፡፡በኢትዮጵያ እና ማሊ መካከል የሚደረገው ጨዋታ እሁድ መስከረም 22 ቀን 2009 በአዲስ አበባ ስታድየም 10:00 ላይ የሚካሄድ ሲሆን ጨዋታው የሚካሄድበት የአዲስ አበባ ስታድየም ሜዳን ምቹ ለማድረግ የሚያስችሉ ስራዎች እየተከናወኑ ይገኛሉ፡፡የእሁዱ ተጋጣሚያችን የማሊ ከ17 አመት በታች ብሄራዊ ቡድን አርብ አዲስ አበባ ይገባል፡፡ ጨዋታውን የሚመሩት አራቱም ዳኞች ከኒጀር ፤ ኮሚሽነሩ ደግሞ ከዩጋንዳ እንደተመደቡም ታውቋል፡፡</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
512,
256
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
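Because this loss trains nested 512- and 256-dimensional sub-embeddings, the model can also be loaded with a smaller output dimensionality directly; a minimal sketch:

```python
# Minimal sketch: truncate embeddings to the 256-dim Matryoshka prefix at load time.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "Desalegnn/Desu-roberta-amharic-embed-medium-45k", truncate_dim=256
)
embeddings = model.encode(["ሰላም ለዓለም"])
print(embeddings.shape)  # (1, 256)
```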
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:----------------------:|:----------------------:|
| -1 | -1 | - | 0.0817 | 0.0650 |
| 1.0 | 315 | 0.9415 | 0.7090 | 0.6974 |
| 2.0 | 630 | 0.221 | 0.7502 | 0.7428 |
| 3.0 | 945 | 0.1085 | 0.7570 | 0.7502 |
| 4.0 | 1260 | 0.0701 | 0.7678 | 0.7626 |
| **5.0** | **1575** | **0.0548** | **0.7709** | **0.7649** |
* The bold row denotes the saved checkpoint.
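The two NDCG columns report the same embeddings truncated to 512 and 256 dimensions. A minimal sketch of encoding at a reduced dimension via `truncate_dim` (the model id is a placeholder):
```python
from sentence_transformers import SentenceTransformer

# truncate_dim keeps only the first 256 dimensions of each embedding
model = SentenceTransformer("your-username/your-matryoshka-model", truncate_dim=256)
embeddings = model.encode(["example query"])
print(embeddings.shape)  # (1, 256)
```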
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.2
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758533071
|
poolkiltzn
| 2025-09-22T09:25:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T09:25:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/Smoothie-Qwen3-1.7B-Gensyn-Swarm-lazy_energetic_badger
|
RMCian
| 2025-09-22T09:25:15Z | 137 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am lazy_energetic_badger",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T03:23:50Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lazy_energetic_badger
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sweetdream0530/bfcl-submission
|
sweetdream0530
| 2025-09-22T09:20:56Z | 0 | 0 | null |
[
"bfcl",
"function-calling",
"dialogpt",
"berkeley-function-calling-leaderboard",
"base_model:microsoft/DialoGPT-medium",
"base_model:finetune:microsoft/DialoGPT-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T21:26:47Z |
---
license: apache-2.0
base_model: microsoft/DialoGPT-medium
tags:
- bfcl
- function-calling
- dialogpt
- berkeley-function-calling-leaderboard
---
# BFCL Submission Model
This model is a submission for the Berkeley Function-Calling Leaderboard (BFCL), designed to evaluate LLM function-calling capabilities.
## Model Details
- **Model Type**: Causal Language Model
- **Base Model**: microsoft/DialoGPT-medium
- **Mode**: fc (native function-calling)
- **Parameter Count**: ~345M parameters
- **License**: Apache 2.0
## Function Calling Capabilities
The model can execute the following functions:
1. **web_search**: Search the web for information
2. **get_weather**: Get current weather information
3. **calculate**: Perform mathematical calculations
4. **store_memory**: Store information in memory
5. **retrieve_memory**: Retrieve information from memory
## Usage
The model is designed to be used with the BFCL evaluation framework. The main entry point is the `process_message` function in `handler.py`.
```python
from handler import process_message
# Process a message
result = process_message("Hello, how are you?")
print(result)
```
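A hypothetical call that should exercise the `calculate` function (whether a tool is actually invoked depends on the handler's routing logic):
```python
from handler import process_message

# Expected to route through calculate() rather than plain generation.
result = process_message("What is 23 * 7 + 12?")
print(result)
```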
## Evaluation
This model will be automatically evaluated by the BFCL team using their pinned evaluator version.
## Repository Information
- **GitHub**: https://github.com/sweetdream0530/bfcl_submission
- **HuggingFace**: https://huggingface.co/sweetdream0530/bfcl-submission
- **BFCL Submission**: Use this HuggingFace URL for BFCL evaluation
## License
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
eth-nlped/MathDial-SFT-Qwen2.5-1.5B-Instruct
|
eth-nlped
| 2025-09-22T09:20:05Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"MathDial",
"TutorLLM",
"conversational",
"en",
"dataset:eth-nlped/mathdial",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T11:21:27Z |
---
datasets:
- eth-nlped/mathdial
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
tags:
- MathDial
- TutorLLM
library_name: transformers
pipeline_tag: text-generation
---
## Overview
This model is a **supervised fine-tuned (SFT) language model** trained on the **[MathDial dataset](https://huggingface.co/datasets/eth-nlped/mathdial-chat/viewer/default/train?views%5B%5D=train&row=0)**. MathDial is a dataset of conversational math word problems, where a tutor guides a student through solving step by step.
The model is optimized for:
- Conversational math problem solving
- Step-by-step reasoning in dialogue form
- Scaffolding
Repository: **[GitHub code for SFT Fine-tuning on MathDial](https://github.com/eth-nlped/mathdial/tree/main/SFT_Finetuning)**
---
## Training Details
- **Base model:** *[Qwen/Qwen2.5-1.5B-Instruct]*
- **Fine-tuning method:** Supervised fine-tuning (SFT)
- **Training framework:** *[Hugging Face `transformers` + `trl`]*
- **Epochs:** *[3]*
- **Batch size:** *[8]*
- **Learning rate:** *[6.25e-5]*
Training input and output:
The model was fine-tuned on the **[MathDial dataset](https://huggingface.co/datasets/eth-nlped/mathdial-chat/viewer/default/train?views%5B%5D=train&row=0)**.
Each training example consisted of an **Instruction**, the **Student's Name**, the **Math Word Problem and Solution**, and **the student's initial approach** as input, followed by the **tutor's step-by-step solution** as the target output.
To incorporate the whole conversation, a sliding window approach was used. Every input has the same format:
For each step in a conversation, the model input included **all previous turns** in the dialogue (sliding window), followed by the student’s next message. The model’s output was then the **next tutor response** from the dataset.
This approach ensures the model learns to generate responses that are context-aware.
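A minimal sketch of that construction, assuming each conversation is stored as a list of chat-format turns (all names here are illustrative):
```python
def build_sft_examples(conversation):
    """Turn one tutor-student dialogue into (context, target) pairs."""
    examples = []
    for i, turn in enumerate(conversation):
        # Every tutor turn becomes a target, conditioned on all turns before it.
        if turn["role"] == "assistant" and i > 0:
            examples.append({
                "messages": conversation[:i],  # sliding context window
                "target": turn["content"],     # next tutor response
            })
    return examples
```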
---
## Intended Use
This model is intended for use in:
- Interactive math tutoring
- Research in dialogue-based problem solving
- Educational tools
---
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "eth-nlped/MathDial-SFT-Qwen2.5-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# The model was trained with conversations that include:
# The System prompt with the student's name (in this example "Mariana"), A math word problem with the correct solution and the student's incorrect solution.
# Then the Tutor (assistant) asks the student (user) to explain their solution
# Followed by the student's explanation
# The conversation can be extended by adding another tutor response and the student's next message.
# For more conversations, check out the MathDial dataset, linked above
messages = [
{"content": "You are a friendly and supportive teacher.\nThe student, with the name Mariana, is trying to solve the following problem: Julia was preparing for a dinner party at her house, where she intended to serve stew. She noticed that she was out of plastic spoons, so she bought a new package of spoons. Later, her husband also bought a package of 5 new spoons and gave them to Julia. While Julia was making the stew, she used three of the spoons to sample her stew. Later, when she went to set the table, she had a total of 12 spoons. How many spoons were in the package that Julia bought?.\n\nThe correct solution is as follows:\nThe total number of spoons from Julia and her husband was 12+3=15 spoons.\nSince the husband bought a package of five spoons, then Julia's package contained 15-5=10 spoons.\n 10\n","role": "system",},
{"content": "Let's call the number of spoons Julia bought \"x\". \nHer husband bought 5 more spoons, so the total number of spoons is now x + 5. \nJulia used 3 spoons to sample her stew, so she had 12 - 3 = 9 spoons left. \nWe know that the total number of spoons is x + 5, so we can set up an equation: \n\nx + 5 = 9 \n\nSubtracting 5 from both sides: \n\nx = 4 \n\nSo Julia bought a package of 4 spoons. \n 4","role": "user",},
{"content": "Hi Mariana, please talk me through your solution","role": "assistant",},
{"content": "Sure. I started by letting x be the number of spoons Julia bought. Then I added 5 to x to get the total number of spoons. Next, I subtracted 3 from the total number of spoons to get the number of spoons left. Finally, I set up an equation and solved for x, which was 4. So Julia bought a package of 4 spoons.","role": "user",},
]
#apply chat template
chat_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(chat_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
#Example output: excellent start. lets work from the top. if we know she has 12 spoons left, and already used 3. how many did she start with?
```
---
## Citation
|
TurkuNLP/bge-embeddings-subregister-classification
|
TurkuNLP
| 2025-09-22T09:18:22Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T09:18:22Z |
---
license: apache-2.0
---
|
TiMOld/Qwen3-0.6B-Gensyn-Swarm-twitchy_foxy_ram
|
TiMOld
| 2025-09-22T09:12:51Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am twitchy_foxy_ram",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T11:22:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am twitchy_foxy_ram
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sabersamax/value-model-1.5b
|
sabersamax
| 2025-09-22T09:11:44Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-21T14:06:18Z |
# Value Model (Base + Value Head)
- Base: Qwen/Qwen2.5-Math-1.5B-Instruct
- This folder contains base model weights (safetensors shards) and an extra `value_head.safetensors`.
## Quick inference (Python)
```python
import os

import torch
from safetensors.torch import load_file
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "/home/huimin/New-Proj/value_model-1.5b/hf_converted_model"

# Load the base LM in bfloat16 and rebuild the scalar value head.
base = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.bfloat16)
value_head = torch.nn.Linear(base.config.hidden_size, 1, bias=False)
state = load_file(os.path.join(model_dir, "value_head.safetensors"))
value_head.load_state_dict({"weight": state["value_head.weight"]})
# Match the head's dtype to the base model so the matmul below doesn't fail.
value_head = value_head.to(torch.bfloat16)

tok = AutoTokenizer.from_pretrained(model_dir)
inputs = tok("Hello", return_tensors="pt")
with torch.no_grad():
    outputs = base(**inputs, output_hidden_states=True)
last = outputs.hidden_states[-1]       # (batch, seq_len, hidden_size)
values = value_head(last).squeeze(-1)  # one value per token: (batch, seq_len)
print(values.shape)
```
|
geogle/my_awesome_eli5_clm-model
|
geogle
| 2025-09-22T09:11:33Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T08:46:25Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilgpt2
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8359
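For reference, this evaluation loss corresponds to a perplexity of exp(3.8359) ≈ 46.3.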
## Model description
More information needed
## Intended uses & limitations
More information needed
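A minimal generation sketch, assuming standard `transformers` pipeline usage (the prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="geogle/my_awesome_eli5_clm-model")
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=50)[0]["generated_text"])
```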
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9074 | 1.0 | 1324 | 3.8478 |
| 3.8165 | 2.0 | 2648 | 3.8380 |
| 3.7742 | 3.0 | 3972 | 3.8359 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
tomal66/gemma3-1b-emotion-fpt-sft
|
tomal66
| 2025-09-22T09:11:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T09:11:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
y1y2y3/smolvla_base2_migrated
|
y1y2y3
| 2025-09-22T09:07:52Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:unknown",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-22T09:05:52Z |
---
base_model: lerobot/smolvla_base
datasets: unknown
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- lerobot
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
CATIE-AQ/Idefics3_FT_fr
|
CATIE-AQ
| 2025-09-22T09:07:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"multimodal",
"vision",
"image-text-to-text",
"fr",
"arxiv:2408.12637",
"base_model:HuggingFaceM4/Idefics3-8B-Llama3",
"base_model:finetune:HuggingFaceM4/Idefics3-8B-Llama3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-12-16T08:23:23Z |
---
license: apache-2.0
language:
- fr
tags:
- multimodal
- vision
- image-text-to-text
library_name: transformers
base_model:
- HuggingFaceM4/Idefics3-8B-Llama3
---
# Idefics3_FT_fr
[Idefics3-8B-Llama3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) trained with the data in the following [collection](https://huggingface.co/collections/CATIE-AQ/french-vqa-datasets-678a607a4c08258a5212950b) with this [script](https://github.com/catie-aq/multimodal_RAG_with_VLMs/blob/main/idefics_FT.py).
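A minimal inference sketch, assuming the standard Idefics3 `transformers` API (the image URL and question are placeholders):
```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from transformers.image_utils import load_image

model_id = "CATIE-AQ/Idefics3_FT_fr"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")

image = load_image("https://example.com/document.png")  # placeholder image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Décris cette image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```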
# Citation
```bibtex
@misc{laurençon2024building,
title={Building and better understanding vision-language models: insights and future directions.},
author={Hugo Laurençon and Andrés Marafioti and Victor Sanh and Léo Tronchon},
year={2024},
eprint={2408.12637},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
Mavdol/NPC-Valence-Arousal-Prediction-ONNX
|
Mavdol
| 2025-09-22T09:04:44Z | 8 | 0 | null |
[
"onnx",
"distilbert",
"en",
"dataset:Mavdol/NPC-Valence-Arousal",
"base_model:distilbert/distilbert-base-uncased",
"base_model:quantized:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T01:20:33Z |
---
license: apache-2.0
datasets:
- Mavdol/NPC-Valence-Arousal
language:
- en
base_model:
- distilbert/distilbert-base-uncased
---
# NPC Valence Arousal Prediction
ONNX version of: [Mavdol/NPC-Valence-Arousal-Prediction](https://huggingface.co/Mavdol/NPC-Valence-Arousal-Prediction)
## What is it?
This model predicts emotional states in video game NPC dialogues using Russell's Circumplex Model of Affect. Trained on dialogue datasets from popular RPGs including Skyrim, Baldur's Gate, Cyberpunk 2077, and others, it analyzes text input and outputs two continuous emotional dimensions:
- **Valence**: How positive or negative the emotion feels (ranging from unpleasant to pleasant)
- **Arousal**: How intense or calm the emotion is (ranging from low energy to high energy)
**Objective**: Enable more nuanced emotional understanding in gaming contexts by moving beyond discrete emotion categories to a continuous 2D emotional space that better captures the complexity of NPC emotional expressions.
## Russell's Circumplex Model - Core Principles
The Circumplex Model represents emotions as points in a circular 2D space defined by two independent axes:
- **Valence (X-axis)**: Pleasant ↔ Unpleasant
- **Positive values** = positive emotions (joy, contentment)
- **Negative values** = negative emotions (sadness, anger)
- **Arousal (Y-axis)**: High Activation ↔ Low Activation
- **High values** = energetic emotions (excitement, rage)
- **Low values** = calm emotions (relaxation, depression)

## Why This Approach?
Unlike traditional emotion classification that uses fixed categories, the circumplex model:
- Captures emotional intensity and nuance
- Allows for smooth transitions between emotional states
- Better represents the continuous nature of human emotions
- Provides context-appropriate emotional understanding for gaming scenarios
Emotions are plotted as coordinates in this circular space.
🎮 Try the Interactive Valence-Arousal Visualizer - [Click anywhere in the circle to explore different emotional coordinates!](https://valence-arousal-visualizer.vercel.app/)
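A minimal inference sketch with ONNX Runtime; the ONNX file name, the input/output names, and the `[valence, arousal]` output layout are assumptions about how the model was exported:
```python
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Mavdol/NPC-Valence-Arousal-Prediction-ONNX")
session = ort.InferenceSession("model.onnx")  # assumed file name within the repo

enc = tokenizer("You dare return here after what you did?", return_tensors="np")
(outputs,) = session.run(None, {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})
valence, arousal = outputs[0]  # assumed output order: [valence, arousal]
print(f"valence={valence:.3f}, arousal={arousal:.3f}")
```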
# Citations
```bibtex
@dataset{NPC-Valence-Arousal-Prediction-ONNX,
title={Valence and Arousal Annotations for Interactive Characters},
author={Mavdol},
year={2025},
url={https://huggingface.co/Mavdol/NPC-Valence-Arousal-Prediction-ONNX},
note={Based on Russell's Circumplex Model of Affect for NPC emotion recognition}
}
```
|
winstonahsam/life
|
winstonahsam
| 2025-09-22T08:52:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-07-02T09:56:48Z |
---
license: apache-2.0
---
|