modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-24 00:43:13) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 573 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-24 00:37:34) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---
pepijn223/pi0_libero_fp32 | pepijn223 | 2025-09-23T11:56:03Z | 117 | 1 | null | ["safetensors", "region:us"] | null | 2025-09-09T15:22:46Z |
# π₀ - Libero
This is a PyTorch version of the π₀ `pi0_libero` model, converted from the original JAX/Flax implementation.
## Model Details
- **Architecture**: PI0 (Vision-Language-Action model with discrete state input)
- **Model Type**: PI0
- **Domain**: LIBERO (diverse manipulation tasks)
- **Precision**: 32-bit floating point (fp32)
- **Action Dimension**: 32
- **Vision Model**: PaliGemma (gemma_2b)
- **Action Expert**: gemma_300m
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /pi0_base \
--config_name pi0_libero \
--output_path /pi0_base/pytorch/fp32/ \
--precision float32
```
## Usage
```python
from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
import torch
# Load the model
model = PI0Pytorch.from_pretrained("pepijn223/pi0_libero_fp32")
```
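For inference, the exact call depends on the OpenPI PyTorch API, which this card does not document. The sketch below is a minimal illustration only; the observation keys and the `sample_actions` method name are assumptions, not the confirmed interface.

```python
import torch

from openpi.models_pytorch.pi0_pytorch import PI0Pytorch

model = PI0Pytorch.from_pretrained("pepijn223/pi0_libero_fp32")
model.eval()

# Hypothetical observation format -- keys and shapes are assumptions;
# consult the OpenPI repository for the real interface.
observation = {
    "image": torch.zeros(1, 3, 224, 224),  # dummy camera frame
    "state": torch.zeros(1, 32),           # robot state, padded to the action dim
    "prompt": "pick up the black bowl",    # language instruction
}

with torch.no_grad():
    actions = model.sample_actions(observation)  # hypothetical method name
```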
## Model Architecture
The model consists of:
1. **Vision Encoder**: PaliGemma-based vision processing
2. **Language Encoder**: Text prompt understanding
3. **Action Expert**: Specialized network for action prediction
4. **Integration Layer**: Combines multimodal information for action output
## Training Data
This model was trained on robotics datasets appropriate for its domain:
- **DROID models**: Trained on diverse robot manipulation data
- **LIBERO models**: Trained on diverse tabletop manipulation scenarios
- **Base models**: Trained on general robotics datasets
## Limitations
- Model performance depends on similarity between deployment and training environments
- May require domain-specific fine-tuning for optimal performance
- Action space must match the trained action dimension (32)
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
lots-o/kre-bert | lots-o | 2025-09-23T11:55:13Z | 0 | 0 | null | ["safetensors", "kre", "custom_code", "region:us"] | null | 2025-09-23T11:46:19Z |
- origin: https://huggingface.co/datawhales/korean-relation-extraction
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("lots-o/kre-bert")
model = AutoModel.from_pretrained("lots-o/kre-bert", trust_remote_code=True)
```
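A minimal forward pass could then look like the following. This is a sketch: the example sentence and the assumption that plain text (without entity markers) is valid input are not documented in this card; the relation-extraction preprocessing is described in the origin repository.

```python
import torch

# Example Korean sentence: "Yi Sun-sin was a military officer of the
# mid-Joseon period." Entity-marker formatting, if required, is not
# documented here -- see the origin repository.
inputs = tokenizer("이순신은 조선 중기의 무신이다.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```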
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758628270 | poolkiltzn | 2025-09-23T11:52:41Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us"] | null | 2025-09-23T11:52:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Artik1985/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shiny_hibernating_alpaca | Artik1985 | 2025-09-23T11:51:07Z | 94 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am shiny_hibernating_alpaca", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-22T01:29:07Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am shiny_hibernating_alpaca
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rstudioModel/Desi_Mousumi_Roy_Flux1D_loras | rstudioModel | 2025-09-23T11:51:03Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-09-19T16:31:06Z |
---
license: apache-2.0
---
|
phospho-app/gr00t-dataset_20250901_A-6eajrfbo7h | phospho-app | 2025-09-23T11:49:57Z | 0 | 0 | phosphobot | ["phosphobot", "safetensors", "gr00t_n1_5", "gr00t", "robotics", "dataset:sng319521/dataset_20250901_A", "region:us"] | robotics | 2025-09-23T10:45:03Z |
---
datasets: sng319521/dataset_20250901_A
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t model - 🧪 phosphobot training pipeline
- **Dataset**: [sng319521/dataset_20250901_A](https://huggingface.co/datasets/sng319521/dataset_20250901_A)
- **Wandb run id**: None
## This model was trained using **[🧪phospho](https://phospho.ai)**
Training was successful; try it out on your robot!
## Training parameters
```text
{
"validation_dataset_name": null,
"batch-size": 49,
"num-epochs": 10,
"save-steps": 1000,
"learning_rate": 0.0001,
"data_dir": "/tmp/outputs/data",
"validation_data_dir": "/tmp/outputs/validation_data",
"output_dir": "/tmp/outputs/train"
}
```
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
tomal66/qwen2.5-1.5b-SentNoB-fpt-sft | tomal66 | 2025-09-23T11:47:43Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-23T11:47:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yuanlinwen/uuu_fine_tune_taipower | yuanlinwen | 2025-09-23T11:45:46Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-09-23T05:56:25Z |
---
license: apache-2.0
---
|
Giangara/Alvien6 | Giangara | 2025-09-23T11:45:39Z | 0 | 0 | null | ["region:us"] | null | 2025-09-23T11:44:15Z |
```bash
# Install the Hugging Face CLI
pip install -U "huggingface_hub[cli]"

# Login with your Hugging Face credentials
hf auth login

# Push your model files
hf upload Giangara/Alvien6 .
```
|
aamijar/ReplaceME-Llama-2-5B-lora-r8-sst2-epochs0 | aamijar | 2025-09-23T11:45:30Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-23T11:45:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kuongan/Halvisobert_finetuned | Kuongan | 2025-09-23T11:43:57Z | 0 | 0 | transformers | ["transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:uitnlp/visobert", "base_model:finetune:uitnlp/visobert", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-09-23T10:17:20Z |
---
library_name: transformers
base_model: uitnlp/visobert
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Halvisobert_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Halvisobert_finetuned
This model is a fine-tuned version of [uitnlp/visobert](https://huggingface.co/uitnlp/visobert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1758
- Accuracy: 0.5086
- F1 Macro: 0.5094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
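For reference, a `TrainingArguments` configuration reproducing the hyperparameters listed above would look roughly like this; the `output_dir` and any arguments not listed are assumptions, not the original training script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Halvisobert_finetuned",  # assumption; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed-precision training
)
```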
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log | 1.0 | 44 | 1.0899 | 0.3843 | 0.3046 |
| 1.1009 | 2.0 | 88 | 1.0683 | 0.4329 | 0.4193 |
| 1.0659 | 3.0 | 132 | 1.0396 | 0.4729 | 0.4713 |
| 0.9957 | 4.0 | 176 | 1.0333 | 0.4871 | 0.4819 |
| 0.8943 | 5.0 | 220 | 1.0599 | 0.4879 | 0.4883 |
| 0.813 | 6.0 | 264 | 1.1583 | 0.4743 | 0.4612 |
| 0.7068 | 7.0 | 308 | 1.1758 | 0.5086 | 0.5094 |
| 0.6142 | 8.0 | 352 | 1.2664 | 0.5 | 0.4934 |
| 0.6142 | 9.0 | 396 | 1.4317 | 0.4757 | 0.4679 |
| 0.5561 | 10.0 | 440 | 1.5167 | 0.4707 | 0.4617 |
| 0.4873 | 11.0 | 484 | 1.5269 | 0.5057 | 0.4971 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
atrost/math_sft_40K_trl_SFT_Regularized-0.95_Normalize-True | atrost | 2025-09-23T11:43:38Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen3-1.7B-Base", "base_model:finetune:Qwen/Qwen3-1.7B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-19T21:37:42Z |
---
base_model: Qwen/Qwen3-1.7B-Base
library_name: transformers
model_name: math_sft_40K_trl_SFT_Regularized-0.95_Normalize-True
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for math_sft_40K_trl_SFT_Regularized-0.95_Normalize-True
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atrost/math_sft_40K_trl_SFT_Regularized-0.95_Normalize-True", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/astrost-university-of-wisconsin-madison/sft-regularized-sft/runs/hdxmf8mx)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kuldeepshinde1405/herb-anomaly-detector | kuldeepshinde1405 | 2025-09-23T11:43:33Z | 0 | 0 | null | ["region:us"] | null | 2025-09-23T11:43:31Z |
# 🌿 Herb Anomaly Detector
This is an autoencoder model trained to detect anomalies in herb quality data.
## Files:
- herb_autoencoder.pth
- scaler.pkl
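A typical way to use these two artifacts is sketched below. This is an illustration under stated assumptions: the card does not document the autoencoder architecture, the input dimensionality, or the anomaly threshold, so the `HerbAutoencoder` class, the feature count, and the threshold here are placeholders that must be replaced to match the actual checkpoint.

```python
import pickle

import numpy as np
import torch
import torch.nn as nn

# Placeholder architecture -- the real layer sizes are not documented in
# this card and must match the checkpoint for load_state_dict to succeed.
class HerbAutoencoder(nn.Module):
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 4), nn.ReLU())
        self.decoder = nn.Linear(4, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

with open("scaler.pkl", "rb") as f:
    scaler = pickle.load(f)  # e.g. a fitted sklearn scaler

model = HerbAutoencoder()
model.load_state_dict(torch.load("herb_autoencoder.pth", map_location="cpu"))
model.eval()

sample = np.zeros((1, 8), dtype=np.float32)  # replace with real herb features
x = torch.from_numpy(scaler.transform(sample).astype(np.float32))
with torch.no_grad():
    error = torch.mean((model(x) - x) ** 2).item()  # reconstruction error
print("anomaly" if error > 0.05 else "normal")  # threshold is an assumption
```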
|
lamekemal/results | lamekemal | 2025-09-23T11:40:24Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2-2b-it", "base_model:finetune:google/gemma-2-2b-it", "endpoints_compatible", "region:us"] | null | 2025-09-23T11:40:20Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: results
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for results
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lamekemal/results", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
scanto/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_pensive_grouse | scanto | 2025-09-23T11:39:50Z | 111 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am bellowing_pensive_grouse", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T20:21:19Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am bellowing_pensive_grouse
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
f1663247/webshop-20 | f1663247 | 2025-09-23T11:39:15Z | 0 | 0 | null | ["safetensors", "qwen2", "region:us"] | null | 2025-09-23T09:53:31Z |
# Converted checkpoint
This folder contains a merged Hugging Face model exported from RL checkpoints.
- Format: safetensors
- File: model.safetensors
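Given the safetensors format and the qwen2 tag, the checkpoint can presumably be loaded with standard transformers calls. A minimal sketch, assuming tokenizer files are present in the repo (the card does not state this):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("f1663247/webshop-20")
model = AutoModelForCausalLM.from_pretrained("f1663247/webshop-20")
```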
|
helenabon/hatedemics-v2-llama-0818 | helenabon | 2025-09-23T11:37:46Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "lora", "sft", "transformers", "trl", "text-generation", "conversational", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.1-8B-Instruct", "region:us"] | text-generation | 2025-09-23T11:35:34Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
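Until the authors fill this section in, a standard PEFT adapter load, inferred from the repo metadata above (a LoRA adapter on meta-llama/Llama-3.1-8B-Instruct), would look roughly like this:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch inferred from the repo metadata; access to the gated Llama base
# weights is required.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
model = PeftModel.from_pretrained(base, "helenabon/hatedemics-v2-llama-0818")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
```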
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
pritad/customer-faq-llama-3.2-3B | pritad | 2025-09-23T11:37:41Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-01-29T08:51:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GaborMadarasz/AstroQA_mamba_epoch2_V6 | GaborMadarasz | 2025-09-23T11:35:48Z | 0 | 0 | transformers | ["transformers", "safetensors", "mamba", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-23T11:35:32Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mkenfenheuer/llama-3.2-1B-log-analyzer-Q4_K_M-GGUF | mkenfenheuer | 2025-09-23T11:33:52Z | 0 | 0 | null | ["gguf", "llama-cpp", "gguf-my-repo", "base_model:mkenfenheuer/llama-3.2-1B-log-analyzer", "base_model:quantized:mkenfenheuer/llama-3.2-1B-log-analyzer", "endpoints_compatible", "region:us"] | null | 2025-09-23T11:33:45Z |
---
base_model: mkenfenheuer/llama-3.2-1B-log-analyzer
tags:
- llama-cpp
- gguf-my-repo
---
# mkenfenheuer/llama-3.2-1B-log-analyzer-Q4_K_M-GGUF
This model was converted to GGUF format from [`mkenfenheuer/llama-3.2-1B-log-analyzer`](https://huggingface.co/mkenfenheuer/llama-3.2-1B-log-analyzer) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mkenfenheuer/llama-3.2-1B-log-analyzer) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mkenfenheuer/llama-3.2-1B-log-analyzer-Q4_K_M-GGUF --hf-file llama-3.2-1b-log-analyzer-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mkenfenheuer/llama-3.2-1B-log-analyzer-Q4_K_M-GGUF --hf-file llama-3.2-1b-log-analyzer-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mkenfenheuer/llama-3.2-1B-log-analyzer-Q4_K_M-GGUF --hf-file llama-3.2-1b-log-analyzer-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mkenfenheuer/llama-3.2-1B-log-analyzer-Q4_K_M-GGUF --hf-file llama-3.2-1b-log-analyzer-q4_k_m.gguf -c 2048
```
|
oliverguhr/gemma-3-1b-german-spelling | oliverguhr | 2025-09-23T11:31:14Z | 24 | 0 | transformers | ["transformers", "safetensors", "gguf", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:quantized:unsloth/gemma-3-1b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-09-22T09:48:22Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** oliverguhr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
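The card does not include usage code. Given the text-generation pipeline tag and the model name, a chat-style invocation might look like the sketch below; the instruction wording is an assumption about how the model expects spelling-correction requests.

```python
from transformers import pipeline

corrector = pipeline("text-generation", model="oliverguhr/gemma-3-1b-german-spelling")
# Prompt format is an assumption: "Correct the spelling:" followed by a
# German sentence containing deliberate errors.
prompt = [{"role": "user", "content": "Korrigiere die Rechtschreibung: Das ist ein Satz mit vielen feler."}]
print(corrector(prompt, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```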
|
MananSuri27/Qwen2.5-7B-Instruct-GRPO-NoMult-ARGUS-20250922_200131 | MananSuri27 | 2025-09-23T11:30:25Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-09-23T11:28:18Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** MananSuri27
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
analist/gpt-oss-20b-multilingual-reasoner | analist | 2025-09-23T11:23:29Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "unsloth", "sft", "dataset:HuggingFaceH4/Multilingual-Thinking", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "endpoints_compatible", "region:us"] | null | 2025-09-23T11:23:19Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [unsloth/gpt-oss-20b-unsloth-bnb-4bit](https://huggingface.co/unsloth/gpt-oss-20b-unsloth-bnb-4bit) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="analist/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
putassu/gemma-text-to-sql | putassu | 2025-09-23T11:22:56Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us"] | null | 2025-09-23T08:44:03Z |
---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-text-to-sql
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-text-to-sql
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="putassu/gemma-text-to-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 3.3.2
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Khoa/winmart-bert-multi-label-0925
|
Khoa
| 2025-09-23T11:19:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T11:14:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
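A minimal sketch, assuming the repo id from this card and (from the model name) a multi-label classification head; the actual label set and preprocessing are undocumented:

```python
from transformers import pipeline

# top_k=None returns a score for every label, which suits a multi-label head
clf = pipeline("text-classification", model="Khoa/winmart-bert-multi-label-0925", top_k=None)
print(clf("sample input text"))
```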
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MrDave/gemma270m-fiorentino-lora
|
MrDave
| 2025-09-23T11:14:35Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"conversational",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T08:36:50Z |
---
base_model: unsloth/gemma-3-270m-it
library_name: transformers
model_name: gemma270m-fiorentino-lora
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for gemma270m-fiorentino-lora
This model is a fine-tuned version of [unsloth/gemma-3-270m-it](https://huggingface.co/unsloth/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MrDave/gemma270m-fiorentino-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
prasad-sweyaai/finetuned_model_0
|
prasad-sweyaai
| 2025-09-23T11:11:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T11:11:21Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** prasad-sweyaai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tobykim/koelectra-44emotions
|
tobykim
| 2025-09-23T11:10:36Z | 84 | 0 | null |
[
"safetensors",
"electra",
"region:us"
] | null | 2025-09-22T02:49:44Z |
---
{}
---
# Korean Emotion Classification (44 labels, KoELECTRA)
## 📌 Overview
This model was trained to classify **44 emotions** in Korean text.
The base model is [`monologg/koelectra-base-v3-discriminator`](https://huggingface.co/monologg/koelectra-base-v3-discriminator),
fine-tuned on the [KOTE](https://huggingface.co/datasets/searle-j/kote) dataset plus additionally collected data.
---
## 🧾 Model Information
- **Base Model**: KoELECTRA-base-v3-discriminator
- **Task**: Multi-label emotion classification
- **Labels**: 44 emotions
- **Loss Function**: Asymmetric Loss (γ⁻=3); see the sketch below
- **Threshold**: 0.6
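Only γ⁻ = 3 is stated above, so the following is a minimal PyTorch sketch of Asymmetric Loss (Ridnik et al., 2021) for multi-label training; `gamma_pos`, `clip`, and `eps` are assumed defaults, not values confirmed by this card:

```python
import torch

def asymmetric_loss(logits, targets, gamma_neg=3.0, gamma_pos=0.0, clip=0.05, eps=1e-8):
    # targets: multi-hot tensor in {0, 1} with the same shape as logits
    p = torch.sigmoid(logits)
    p_neg = (p - clip).clamp(min=0)  # probability shifting for negatives
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=eps))
    loss_neg = (1 - targets) * p_neg.pow(gamma_neg) * torch.log((1 - p_neg).clamp(min=eps))
    return -(loss_pos + loss_neg).mean()
```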
---
## 🎯 Emotion Labels (44 total)
불평/불만, 환영/호의, 감동/감탄, 지긋지긋, 고마움, 슬픔, 화남/분노,
존경, 기대감, 우쭐댐/무시함, 안타까움/실망, 비장함, 의심/불신, 뿌듯함,
편안/쾌적, 신기함/관심, 아껴주는, 부끄러움, 공포/무서움, 절망,
한심함, 역겨움/징그러움, 짜증, 어이없음, 없음, 패배/자기혐오,
귀찮음, 힘듦/지침, 즐거움/신남, 깨달음, 죄책감, 증오/혐오,
흐뭇함(귀여움/예쁨), 당황/난처, 경악, 부담/안_내킴,
서러움, 재미없음, 불쌍함/연민, 놀람, 행복, 불안/걱정, 기쁨, 안심/신뢰
---
## 📊 Performance (validation set)
- **Micro F1**: ~0.62
- **Micro Precision**: ~0.70
- **Micro Recall**: ~0.55
- **Macro F1**: ~0.47
- **Hamming Loss**: ~0.12
---
## 🚀 Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# Load the model
model_name = "tobykim/koelectra-44emotions"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# 입력 문장
text = "오늘 너무 기분 좋아!"
inputs = tokenizer(text, return_tensors="pt")
# 추론
with torch.no_grad():
logits = model(**inputs).logits
probs = torch.sigmoid(logits).numpy()[0]
# 감정 라벨 매핑
LABELS = [
'불평/불만','환영/호의','감동/감탄','지긋지긋',
'고마움','슬픔','화남/분노','존경','기대감',
'우쭐댐/무시함','안타까움/실망','비장함',
'의심/불신','뿌듯함','편안/쾌적','신기함/관심',
'아껴주는','부끄러움','공포/무서움','절망',
'한심함','역겨움/징그러움','짜증','어이없음',
'없음','패배/자기혐오','귀찮음','힘듦/지침',
'즐거움/신남','깨달음','죄책감','증오/혐오',
'흐뭇함(귀여움/예쁨)','당황/난처','경악',
'부담/안_내킴','서러움','재미없음','불쌍함/연민',
'놀람','행복','불안/걱정','기쁨','안심/신뢰'
]
threshold = 0.6  # documented model threshold
results = [(label, float(p)) for label, p in zip(LABELS, probs) if p > threshold]
print(sorted(results, key=lambda x: x[1], reverse=True))
```

---

# 🏷️ License

Base model: KoELECTRA (MIT License)
Dataset: KOTE + additionally collected data (based on publicly available data)
Model: free to use for research and educational purposes
|
tamewild/4b_v125_merged_e5
|
tamewild
| 2025-09-23T11:10:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T11:08:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
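A minimal sketch, assuming the repo id from this card; the prompt and generation settings are illustrative only:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="tamewild/4b_v125_merged_e5")
messages = [{"role": "user", "content": "Hello!"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```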
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
trikohung/kltn
|
trikohung
| 2025-09-23T11:09:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T10:07:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MariaFGI/distilbert-base-uncased-finetuned-distilbert
|
MariaFGI
| 2025-09-23T11:07:01Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T08:49:07Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4983
- Accuracy: 0.9239
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
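A minimal inference sketch, assuming the repo id from this card; the label set and training data are not documented here:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="MariaFGI/distilbert-base-uncased-finetuned-distilbert")
print(clf("This was a great experience!"))
```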
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.298 | 1.0 | 1250 | 0.2646 | 0.9208 | 0.9208 |
| 0.1794 | 2.0 | 2500 | 0.3275 | 0.9212 | 0.9211 |
| 0.1026 | 3.0 | 3750 | 0.3904 | 0.9229 | 0.9228 |
| 0.0554 | 4.0 | 5000 | 0.4486 | 0.9239 | 0.9241 |
| 0.0242 | 5.0 | 6250 | 0.4983 | 0.9239 | 0.9240 |
### Framework versions
- Transformers 4.56.2
- Pytorch 2.8.0+cu126
- Datasets 4.1.1
- Tokenizers 0.22.0
|
uschreiber/llama3.2-10may
|
uschreiber
| 2025-09-23T11:06:09Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-10T09:41:12Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** uschreiber
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
clt013/whisper-small-ft-malay-peft-epoch-20
|
clt013
| 2025-09-23T11:06:05Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ms",
"dataset:clt013/malay-speech-3k-rows-dataset_v2",
"base_model:openai/whisper-small",
"base_model:adapter:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | null | 2024-10-20T17:55:01Z |
---
library_name: peft
language:
- ms
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- clt013/malay-speech-3k-rows-dataset_v2
model-index:
- name: Whisper Small FT Malay - CLT013
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small FT Malay - CLT013
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Malay Speech 3k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8613
## Model description
More information needed
## Intended uses & limitations
More information needed
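Since this repository holds PEFT adapter weights, a minimal loading sketch (assuming the adapter applies on top of `openai/whisper-small`, per the base model stated above):

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the frozen base model, then attach the fine-tuned adapter
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "clt013/whisper-small-ft-malay-peft-epoch-20")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
```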
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.1842 | 0.3731 | 100 | 0.8172 |
| 0.7488 | 0.7463 | 200 | 0.8014 |
| 0.6424 | 1.1194 | 300 | 0.8136 |
| 0.5234 | 1.4925 | 400 | 0.7511 |
| 0.4951 | 1.8657 | 500 | 0.8203 |
| 0.3835 | 2.2388 | 600 | 0.8191 |
| 0.3519 | 2.6119 | 700 | 0.8001 |
| 0.3868 | 2.9851 | 800 | 0.8011 |
| 0.2568 | 3.3582 | 900 | 0.8630 |
| 0.2781 | 3.7313 | 1000 | 0.8269 |
| 0.2535 | 4.1045 | 1100 | 0.8612 |
| 0.2105 | 4.4776 | 1200 | 0.8486 |
| 0.2104 | 4.8507 | 1300 | 0.8367 |
| 0.1726 | 5.2239 | 1400 | 0.8692 |
| 0.1672 | 5.5970 | 1500 | 0.8483 |
| 0.1641 | 5.9701 | 1600 | 0.8443 |
| 0.1186 | 6.3433 | 1700 | 0.9531 |
| 0.1261 | 6.7164 | 1800 | 0.8578 |
| 0.1211 | 7.0896 | 1900 | 0.8922 |
| 0.0962 | 7.4627 | 2000 | 0.9107 |
| 0.1188 | 7.8358 | 2100 | 0.8498 |
| 0.0847 | 8.2090 | 2200 | 0.8554 |
| 0.0802 | 8.5821 | 2300 | 0.9024 |
| 0.0805 | 8.9552 | 2400 | 0.8649 |
| 0.0559 | 9.3284 | 2500 | 0.8634 |
| 0.053 | 9.7015 | 2600 | 0.8988 |
| 0.0555 | 10.0746 | 2700 | 0.8657 |
| 0.0415 | 10.4478 | 2800 | 0.8449 |
| 0.0401 | 10.8209 | 2900 | 0.8658 |
| 0.0318 | 11.1940 | 3000 | 0.8674 |
| 0.0245 | 11.5672 | 3100 | 0.8491 |
| 0.032 | 11.9403 | 3200 | 0.8694 |
| 0.0186 | 12.3134 | 3300 | 0.8620 |
| 0.0179 | 12.6866 | 3400 | 0.8555 |
| 0.015 | 13.0597 | 3500 | 0.8730 |
| 0.0176 | 13.4328 | 3600 | 0.8458 |
| 0.0155 | 13.8060 | 3700 | 0.8454 |
| 0.0121 | 14.1791 | 3800 | 0.8533 |
| 0.0139 | 14.5522 | 3900 | 0.8604 |
| 0.009 | 14.9254 | 4000 | 0.8676 |
| 0.0095 | 15.2985 | 4100 | 0.8649 |
| 0.0059 | 15.6716 | 4200 | 0.8728 |
| 0.0065 | 16.0448 | 4300 | 0.8570 |
| 0.0049 | 16.4179 | 4400 | 0.8521 |
| 0.0042 | 16.7910 | 4500 | 0.8600 |
| 0.0051 | 17.1642 | 4600 | 0.8741 |
| 0.0037 | 17.5373 | 4700 | 0.8666 |
| 0.0037 | 17.9104 | 4800 | 0.8691 |
| 0.0029 | 18.2836 | 4900 | 0.8619 |
| 0.0023 | 18.6567 | 5000 | 0.8603 |
| 0.0019 | 19.0299 | 5100 | 0.8629 |
| 0.0018 | 19.4030 | 5200 | 0.8608 |
| 0.0018 | 19.7761 | 5300 | 0.8613 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
16dvnk/AaI_mini.plus_alpha.plus_250729_Base
|
16dvnk
| 2025-09-23T11:05:15Z | 0 | 1 |
transformers
|
[
"transformers",
"Self",
"text-generation",
"en",
"dataset:Navanjana/Gutenberg_books",
"dataset:aisuko/simple_english_wikipedia",
"dataset:stas/openwebtext-10k",
"dataset:RaiBP/openwebtext2-first-30-chunks-lang-detect-raw-output",
"dataset:lucadiliello/bookcorpusopen",
"dataset:deepmind/pg19",
"license:cc0-1.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-31T08:46:41Z |
---
license: cc0-1.0
datasets:
- Navanjana/Gutenberg_books
- aisuko/simple_english_wikipedia
- stas/openwebtext-10k
- RaiBP/openwebtext2-first-30-chunks-lang-detect-raw-output
- lucadiliello/bookcorpusopen
- deepmind/pg19
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- Self
model-index:
- name: AaI
results:
- task:
type: text-classification
name: Multiple Choice
dataset:
name: ai2_arc
type: ai2_arc
config: ARC-Easy
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.8
---
## **Safety Concerns**
This model has not undergone any safety tuning. We are not responsible for any damages. We updated this model from .pth to .safetensors.
## AaI Introduction
AaI is a model made entirely from scratch by 16dvnk on his NVIDIA GeForce RTX 4080 Laptop GPU. He trained it for 11 hours straight and, after some tuning, produced this model. He says the process was a pain and took a great deal of effort. He named it AaI rather than AAI or other variations, which he considers an "eyesore".
## Architecture
The model uses a Generative pre-trained transformer architecture.
## Technical Specifications
| AaI Specs | Details |
|------------------------|----------------------------------------|
| Creator | 16dvnk |
| Hardware | NVIDIA GeForce RTX 4080 Laptop GPU |
| Training Duration | 11 hours |
| Framework | PyTorch |
| Parameter Count | 14 million |
| Model Type | Generative pre-trained transformer |
| Initial Training Year | 2025 |
| Stable Release Status | No stable release as of September 2025 |
## Evaluation Results
The model was evaluated on the **ARC-Easy** benchmark (test split).
| Dataset | Split | Metric | Value |
|----------|-------|----------|---------|
| ARC-Easy | test | Accuracy | 0.80 |
## Notes
• All current releases have 14M parameters, which is considered small.
• The model was trained using PyTorch.
• As of September 2025, there is no stable release of AaI.
|
clt013/whisper-large-v3-ft-malay-peft-v1
|
clt013
| 2025-09-23T11:04:54Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ms",
"dataset:clt013/malay-speech-3k-rows-dataset_v2",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"license:apache-2.0",
"region:us"
] | null | 2024-10-07T14:08:43Z |
---
base_model: openai/whisper-large-v3
datasets:
- clt013/malay-speech-3k-rows-dataset_v2
language:
- ms
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Whisper Large v3 FT Malay - CLT013
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large v3 FT Malay - CLT013
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Malay Speech 3k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5614 | 0.0933 | 25 | 2.6198 |
| 2.9109 | 0.1866 | 50 | 2.5967 |
| 2.5414 | 0.2799 | 75 | 2.5518 |
| 2.4919 | 0.3731 | 100 | 2.4742 |
| 2.5861 | 0.4664 | 125 | 2.3639 |
| 2.454 | 0.5597 | 150 | 2.2213 |
| 2.32 | 0.6530 | 175 | 2.0616 |
| 2.1081 | 0.7463 | 200 | 1.8668 |
| 1.7976 | 0.8396 | 225 | 1.6736 |
| 1.7597 | 0.9328 | 250 | 1.5280 |
| 1.469 | 1.0261 | 275 | 1.4172 |
| 1.4484 | 1.1194 | 300 | 1.3275 |
| 1.2641 | 1.2127 | 325 | 1.2592 |
| 1.1853 | 1.3060 | 350 | 1.1972 |
| 1.184 | 1.3993 | 375 | 1.1449 |
| 1.1733 | 1.4925 | 400 | 1.0964 |
| 1.0707 | 1.5858 | 425 | 1.0568 |
| 0.9975 | 1.6791 | 450 | 1.0172 |
| 0.9897 | 1.7724 | 475 | 0.9855 |
| 1.0223 | 1.8657 | 500 | 0.9524 |
| 0.875 | 1.9590 | 525 | 0.9232 |
| 0.9242 | 2.0522 | 550 | 0.8968 |
| 0.8829 | 2.1455 | 575 | 0.8709 |
| 0.8491 | 2.2388 | 600 | 0.8454 |
| 0.7793 | 2.3321 | 625 | 0.8236 |
| 0.7733 | 2.4254 | 650 | 0.7993 |
| 0.7085 | 2.5187 | 675 | 0.7787 |
| 0.7403 | 2.6119 | 700 | 0.7596 |
| 0.7019 | 2.7052 | 725 | 0.7415 |
| 0.722 | 2.7985 | 750 | 0.7309 |
| 0.6403 | 2.8918 | 775 | 0.7220 |
| 0.699 | 2.9851 | 800 | 0.7194 |
### Framework versions
- PEFT 0.13.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
DanielPaull/ppo-LunarLander-v2
|
DanielPaull
| 2025-09-23T11:04:16Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-05T01:22:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -105.23 +/- 85.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch; the checkpoint filename inside the repo is an assumption, so check the repository's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename below is assumed, not confirmed by this card
checkpoint = load_from_hub("DanielPaull/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
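A hedged rollout example with the loaded policy; on recent Gymnasium releases the environment id is `LunarLander-v3` instead of `LunarLander-v2`:

```python
import gymnasium as gym

env = gym.make("LunarLander-v2")  # use "LunarLander-v3" on newer gymnasium versions
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)  # `model` from the snippet above
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```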
|
kemalburakduman/looza
|
kemalburakduman
| 2025-09-23T11:03:57Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-09-23T11:03:57Z |
---
license: bigcode-openrail-m
---
|
valuelight/HiVES-2
|
valuelight
| 2025-09-23T11:00:37Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"qwen3",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:226454",
"loss:HierarchicalAlignLoss",
"arxiv:1908.10084",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-22T08:29:15Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:226454
- loss:HierarchicalAlignLoss
widget:
- source_sentence: Marrying Niclas
sentences:
- duty
- right
- right
- source_sentence: being upset with my partner over gaming
sentences:
- mft
- right
- mft
- source_sentence: having to isolate myself at times from my friends at a Disney trip
sentences:
- pvq
- mft
- pvq
- source_sentence: Yeah we do want to massively reduce/halt Islamic immigration. Far
right parties want to do that. Far right parties like UKIP and the FN definitely
don't want to reduce immigration completely. Farage "we need more Indians and
less Eastern Europeans and Muslims" certainly doesn't want no immigration. Le
Pen extensively campaigned in migrant communities.
sentences:
- duty
- mft
- mft
- source_sentence: stealing a loaf of bread to feed your sister's starving child
sentences:
- mft
- mft
- mft
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- acc_level_1
- acc_level_2
- acc_level_3
- acc_level_dir
- mean_dist_same_l1
- mean_dist_same_l1_dir
- mean_dist_same_l1_diffdir
- mean_dist_same_l1_l2
- mean_dist_same_l1_l2_dir
- mean_dist_same_l1_l2_diffdir
- mean_dist_same_l1_l2_l3
- mean_dist_same_l1_l2_l3_dir
- mean_dist_same_l1_l2_l3_diffdir
- mean_dist_diff_l1
- ratio_mean_dist_same_l1_to_diff_l1
- ratio_mean_dist_same_l1_dir_to_diff_l1
- ratio_mean_dist_same_l1_diffdir_to_diff_l1
- ratio_mean_dist_same_l1_l2_to_diff_l1
- ratio_mean_dist_same_l1_l2_dir_to_diff_l1
- ratio_mean_dist_same_l1_l2_diffdir_to_diff_l1
- ratio_mean_dist_same_l1_l2_l3_to_diff_l1
- ratio_mean_dist_same_l1_l2_l3_dir_to_diff_l1
- ratio_mean_dist_same_l1_l2_l3_diffdir_to_diff_l1
- rank_acc_strict
- rank_acc_pairwise
- sim_corr
model-index:
- name: SentenceTransformer
results:
- task:
type: hierarchical-embedding-evaluator-(ranking-+-distances-+-optional-knn)
name: Hierarchical Embedding Evaluator (ranking + distances + optional KNN)
dataset:
name: hierarchical eval
type: hierarchical_eval
metrics:
- type: acc_level_1
value: 0.0
name: Acc Level 1
- type: acc_level_2
value: 0.0
name: Acc Level 2
- type: acc_level_3
value: 0.0
name: Acc Level 3
- type: acc_level_dir
value: 0.0
name: Acc Level Dir
- type: mean_dist_same_l1
value: 0.39078110456466675
name: Mean Dist Same L1
- type: mean_dist_same_l1_dir
value: 0.36649683117866516
name: Mean Dist Same L1 Dir
- type: mean_dist_same_l1_diffdir
value: 0.4079992175102234
name: Mean Dist Same L1 Diffdir
- type: mean_dist_same_l1_l2
value: 0.4155833125114441
name: Mean Dist Same L1 L2
- type: mean_dist_same_l1_l2_dir
value: 0.4028528928756714
name: Mean Dist Same L1 L2 Dir
- type: mean_dist_same_l1_l2_diffdir
value: 0.42470887303352356
name: Mean Dist Same L1 L2 Diffdir
- type: mean_dist_same_l1_l2_l3
value: 0.374369353055954
name: Mean Dist Same L1 L2 L3
- type: mean_dist_same_l1_l2_l3_dir
value: 0.3517822325229645
name: Mean Dist Same L1 L2 L3 Dir
- type: mean_dist_same_l1_l2_l3_diffdir
value: 0.3947998583316803
name: Mean Dist Same L1 L2 L3 Diffdir
- type: mean_dist_diff_l1
value: 0.43515878915786743
name: Mean Dist Diff L1
- type: ratio_mean_dist_same_l1_to_diff_l1
value: 0.8980195604480798
name: Ratio Mean Dist Same L1 To Diff L1
- type: ratio_mean_dist_same_l1_dir_to_diff_l1
value: 0.8422140154584055
name: Ratio Mean Dist Same L1 Dir To Diff L1
- type: ratio_mean_dist_same_l1_diffdir_to_diff_l1
value: 0.9375869858903595
name: Ratio Mean Dist Same L1 Diffdir To Diff L1
- type: ratio_mean_dist_same_l1_l2_to_diff_l1
value: 0.955015325131531
name: Ratio Mean Dist Same L1 L2 To Diff L1
- type: ratio_mean_dist_same_l1_l2_dir_to_diff_l1
value: 0.92576067153621
name: Ratio Mean Dist Same L1 L2 Dir To Diff L1
- type: ratio_mean_dist_same_l1_l2_diffdir_to_diff_l1
value: 0.9759859702143053
name: Ratio Mean Dist Same L1 L2 Diffdir To Diff L1
- type: ratio_mean_dist_same_l1_l2_l3_to_diff_l1
value: 0.860305163042771
name: Ratio Mean Dist Same L1 L2 L3 To Diff L1
- type: ratio_mean_dist_same_l1_l2_l3_dir_to_diff_l1
value: 0.8083996952095215
name: Ratio Mean Dist Same L1 L2 L3 Dir To Diff L1
- type: ratio_mean_dist_same_l1_l2_l3_diffdir_to_diff_l1
value: 0.9072547037271361
name: Ratio Mean Dist Same L1 L2 L3 Diffdir To Diff L1
- type: rank_acc_strict
value: 0.140625
name: Rank Acc Strict
- type: rank_acc_pairwise
value: 0.6297662976629766
name: Rank Acc Pairwise
- type: sim_corr
value: 0.14557536633780185
name: Sim Corr
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained on the csv dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 32768 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32768, 'do_lower_case': False}) with Transformer model: Qwen3Model
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("valuelight/HiVES-2")
# Run inference
sentences = [
"stealing a loaf of bread to feed your sister's starving child",
'mft',
'mft',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Hierarchical Embedding Evaluator (ranking + distances + optional KNN)
* Dataset: `hierarchical_eval`
* Evaluated with <code>evaluator.HierarchicalEvaluator</code> with these parameters:
```json
{
"name": "hierarchical_eval",
"k_knn": 1,
"max_samples": 10000,
"max_rank_anchors": 512,
"require_all_bins": false
}
```
| Metric | Value |
|:-------------------------------------------------|:-----------|
| acc_level_1 | 0.0 |
| acc_level_2 | 0.0 |
| acc_level_3 | 0.0 |
| acc_level_dir | 0.0 |
| mean_dist_same_l1 | 0.3908 |
| mean_dist_same_l1_dir | 0.3665 |
| mean_dist_same_l1_diffdir | 0.408 |
| mean_dist_same_l1_l2 | 0.4156 |
| mean_dist_same_l1_l2_dir | 0.4029 |
| mean_dist_same_l1_l2_diffdir | 0.4247 |
| mean_dist_same_l1_l2_l3 | 0.3744 |
| mean_dist_same_l1_l2_l3_dir | 0.3518 |
| mean_dist_same_l1_l2_l3_diffdir | 0.3948 |
| mean_dist_diff_l1 | 0.4352 |
| ratio_mean_dist_same_l1_to_diff_l1 | 0.898 |
| ratio_mean_dist_same_l1_dir_to_diff_l1 | 0.8422 |
| ratio_mean_dist_same_l1_diffdir_to_diff_l1 | 0.9376 |
| ratio_mean_dist_same_l1_l2_to_diff_l1 | 0.955 |
| ratio_mean_dist_same_l1_l2_dir_to_diff_l1 | 0.9258 |
| ratio_mean_dist_same_l1_l2_diffdir_to_diff_l1 | 0.976 |
| ratio_mean_dist_same_l1_l2_l3_to_diff_l1 | 0.8603 |
| ratio_mean_dist_same_l1_l2_l3_dir_to_diff_l1 | 0.8084 |
| ratio_mean_dist_same_l1_l2_l3_diffdir_to_diff_l1 | 0.9073 |
| rank_acc_strict | 0.1406 |
| **rank_acc_pairwise** | **0.6298** |
| sim_corr | 0.1456 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 226,454 training samples
* Columns: <code>text</code>, <code>theory</code>, <code>level_1_idx</code>, <code>level_2_idx</code>, <code>level_3_idx</code>, <code>direction_idx</code>, <code>individual_id</code>, <code>theory_anchor_id</code>, <code>input_ids</code>, <code>attention_mask</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
| | text | theory | level_1_idx | level_2_idx | level_3_idx | direction_idx | individual_id | theory_anchor_id | input_ids | attention_mask | labels |
|:--------|:--------|:--------|:--------|:--------|:--------|:--------|:--------|:--------|:--------|:--------|:--------|
| type | string | string | int | int | int | int | int | int | list | list | list |
| details | <ul><li>min: 3 tokens</li><li>mean: 13.4 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 2.88 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>0: ~9.50%</li><li>1: ~1.00%</li><li>2: ~1.30%</li><li>3: ~20.90%</li><li>4: ~4.70%</li><li>5: ~11.80%</li><li>6: ~2.80%</li><li>7: ~2.30%</li><li>8: ~5.40%</li><li>9: ~4.50%</li><li>10: ~4.60%</li><li>11: ~9.40%</li><li>12: ~5.60%</li><li>13: ~5.60%</li><li>14: ~2.80%</li><li>15: ~5.40%</li><li>16: ~1.10%</li><li>17: ~0.60%</li><li>18: ~0.50%</li><li>19: ~0.20%</li></ul> | <ul><li>0: ~9.20%</li><li>1: ~0.10%</li><li>2: ~17.40%</li><li>3: ~0.40%</li><li>4: ~0.90%</li><li>5: ~0.30%</li><li>6: ~0.90%</li><li>8: ~5.70%</li><li>9: ~2.90%</li><li>10: ~0.60%</li><li>11: ~1.70%</li><li>12: ~1.50%</li><li>13: ~0.70%</li><li>14: ~4.70%</li><li>15: ~2.30%</li><li>16: ~1.30%</li><li>17: ~9.10%</li><li>18: ~1.90%</li><li>19: ~2.10%</li><li>20: ~0.10%</li><li>21: ~0.80%</li><li>22: ~5.20%</li><li>23: ~0.50%</li><li>24: ~0.10%</li><li>25: ~0.60%</li><li>26: ~1.60%</li><li>27: ~1.40%</li><li>28: ~1.10%</li><li>29: ~0.80%</li><li>30: ~0.10%</li><li>31: ~0.30%</li><li>32: ~0.20%</li><li>33: ~0.50%</li><li>34: ~0.90%</li><li>35: ~3.00%</li><li>36: ~6.30%</li><li>37: ~0.30%</li><li>38: ~3.00%</li><li>39: ~0.10%</li><li>40: ~3.00%</li><li>41: ~0.60%</li><li>42: ~1.30%</li><li>43: ~2.40%</li><li>44: ~2.10%</li></ul> | <ul><li>0: ~1.70%</li><li>1: ~3.00%</li><li>3: ~70.00%</li><li>4: ~1.40%</li><li>5: ~0.80%</li><li>6: ~0.70%</li><li>7: ~0.40%</li><li>9: ~0.10%</li><li>10: ~0.30%</li><li>11: ~0.10%</li><li>12: ~0.10%</li><li>15: ~0.20%</li><li>16: ~0.70%</li><li>18: ~0.10%</li><li>19: ~0.10%</li><li>20: ~0.10%</li><li>21: ~0.20%</li><li>22: ~0.10%</li><li>26: ~0.10%</li><li>31: ~2.20%</li><li>32: ~0.80%</li><li>33: ~5.90%</li><li>34: ~1.00%</li><li>35: ~2.00%</li><li>36: ~2.50%</li><li>37: ~0.20%</li><li>38: ~1.70%</li><li>39: ~0.40%</li><li>40: ~1.20%</li><li>41: ~0.90%</li><li>42: ~0.30%</li><li>43: ~0.70%</li></ul> | <ul><li>-1: ~23.80%</li><li>0: ~53.50%</li><li>1: ~22.70%</li></ul> | <ul><li>1: ~0.10%</li><li>2: ~0.20%</li><li>3: ~0.20%</li><li>5: ~1.10%</li><li>6: ~0.20%</li><li>8: ~0.50%</li><li>9: ~0.10%</li><li>10: ~0.30%</li><li>11: ~0.70%</li><li>12: ~0.10%</li><li>14: ~0.10%</li><li>15: ~0.20%</li><li>16: ~0.10%</li><li>17: ~0.20%</li><li>18: ~0.70%</li><li>19: ~0.40%</li><li>20: ~0.20%</li><li>21: ~0.40%</li><li>23: ~0.30%</li><li>24: ~0.30%</li><li>25: ~0.10%</li><li>26: ~0.50%</li><li>27: ~0.50%</li><li>28: ~0.10%</li><li>29: ~0.10%</li><li>31: ~0.10%</li><li>34: ~0.40%</li><li>37: ~0.10%</li><li>38: ~0.20%</li><li>40: ~0.10%</li><li>41: ~0.20%</li><li>42: ~0.30%</li><li>47: ~0.20%</li><li>48: ~0.10%</li><li>49: ~0.20%</li><li>50: ~0.10%</li><li>53: ~0.20%</li><li>55: ~0.50%</li><li>59: ~0.10%</li><li>61: ~0.20%</li><li>62: ~0.10%</li><li>63: ~0.10%</li><li>65: ~0.20%</li><li>66: ~0.10%</li><li>71: ~0.10%</li><li>73: ~0.10%</li><li>83: ~0.10%</li><li>87: ~0.10%</li><li>92: ~0.10%</li><li>97: ~0.10%</li><li>106: ~0.10%</li><li>107: ~0.10%</li><li>108: ~1.40%</li><li>109: ~1.20%</li><li>110: ~0.80%</li><li>111: ~0.10%</li><li>112: ~1.20%</li><li>113: ~1.60%</li><li>114: ~0.10%</li><li>115: ~0.10%</li><li>116: ~1.10%</li><li>117: ~0.40%</li><li>118: ~1.80%</li><li>119: ~1.50%</li><li>120: ~0.40%</li><li>121: ~2.50%</li><li>122: ~1.10%</li><li>123: ~0.50%</li><li>124: ~0.20%</li><li>125: ~0.10%</li><li>126: ~2.40%</li><li>127: ~0.10%</li><li>128: ~0.60%</li><li>129: ~0.80%</li><li>130: ~0.30%</li><li>131: ~0.50%</li><li>132: ~0.70%</li><li>133: ~0.40%</li><li>134: ~1.50%</li><li>135: ~1.90%</li><li>136: ~0.10%</li><li>137: ~0.80%</li><li>138: ~1.70%</li><li>139: ~0.60%</li><li>140: ~0.30%</li><li>141: ~2.40%</li><li>142: ~1.00%</li><li>143: ~1.90%</li><li>144: ~0.20%</li><li>145: ~0.20%</li><li>146: ~0.50%</li><li>147: ~0.10%</li><li>148: ~1.90%</li><li>149: ~2.60%</li><li>150: ~0.40%</li><li>151: ~0.10%</li><li>152: ~0.40%</li><li>153: ~0.30%</li><li>154: ~0.30%</li><li>155: ~0.10%</li><li>156: ~1.10%</li><li>157: ~1.50%</li><li>158: ~0.30%</li><li>159: ~0.60%</li><li>160: ~0.10%</li><li>161: ~1.30%</li><li>162: ~1.50%</li><li>164: ~2.60%</li><li>165: ~0.80%</li><li>166: ~0.10%</li><li>167: ~0.20%</li><li>168: ~0.90%</li><li>169: ~0.10%</li><li>171: ~0.10%</li><li>172: ~0.40%</li><li>173: ~0.20%</li><li>174: ~0.30%</li><li>175: ~0.80%</li><li>176: ~0.20%</li><li>177: ~0.20%</li><li>178: ~0.50%</li><li>179: ~0.20%</li><li>180: ~0.90%</li><li>181: ~0.10%</li><li>182: ~0.30%</li><li>183: ~0.10%</li><li>184: ~1.60%</li><li>185: ~0.80%</li><li>186: ~0.30%</li><li>187: ~1.10%</li><li>188: ~0.10%</li><li>189: ~0.20%</li><li>190: ~0.10%</li><li>191: ~0.30%</li><li>192: ~0.10%</li><li>193: ~0.40%</li><li>195: ~0.20%</li><li>196: ~0.10%</li><li>197: ~0.10%</li><li>198: ~0.10%</li><li>199: ~0.50%</li><li>200: ~0.90%</li><li>202: ~0.30%</li><li>204: ~0.70%</li><li>205: ~0.30%</li><li>208: ~0.10%</li><li>210: ~0.80%</li><li>211: ~0.20%</li><li>213: ~1.00%</li><li>214: ~0.50%</li><li>216: ~1.00%</li><li>217: ~0.40%</li><li>218: ~0.20%</li><li>219: ~0.20%</li><li>220: ~0.10%</li><li>221: ~0.10%</li><li>224: ~0.10%</li><li>227: ~0.10%</li><li>231: ~0.10%</li><li>232: ~0.20%</li><li>233: ~1.00%</li><li>234: ~0.10%</li><li>237: ~0.10%</li><li>238: ~0.10%</li><li>239: ~0.40%</li><li>241: ~0.20%</li><li>248: ~0.10%</li><li>249: ~0.20%</li><li>250: ~0.10%</li><li>257: ~0.20%</li><li>258: ~0.30%</li><li>264: ~0.70%</li><li>265: ~0.10%</li><li>266: ~0.30%</li><li>268: ~0.30%</li><li>269: ~0.10%</li><li>271: ~0.10%</li><li>272: ~0.10%</li><li>273: ~0.20%</li><li>274: ~0.10%</li><li>275: ~0.20%</li><li>276: ~0.40%</li><li>277: ~0.10%</li><li>278: ~0.30%</li><li>279: ~0.10%</li><li>282: ~0.10%</li><li>283: ~0.90%</li><li>288: ~0.10%</li><li>290: ~0.10%</li><li>293: ~0.10%</li><li>294: ~0.10%</li><li>295: ~0.10%</li><li>296: ~0.10%</li><li>297: ~0.50%</li><li>299: ~0.10%</li><li>301: ~0.10%</li><li>302: ~0.10%</li><li>303: ~0.20%</li><li>306: ~0.10%</li><li>314: ~0.30%</li><li>317: ~0.40%</li><li>318: ~0.10%</li><li>319: ~0.20%</li><li>320: ~0.10%</li><li>322: ~0.50%</li><li>324: ~0.10%</li><li>326: ~0.40%</li><li>329: ~0.20%</li><li>330: ~0.10%</li><li>331: ~0.20%</li><li>332: ~0.20%</li><li>334: ~0.50%</li><li>336: ~0.70%</li><li>337: ~0.20%</li><li>338: ~0.10%</li><li>342: ~0.10%</li><li>345: ~0.20%</li><li>349: ~0.10%</li><li>350: ~0.10%</li><li>351: ~0.10%</li><li>358: ~0.10%</li><li>360: ~0.30%</li><li>362: ~0.10%</li><li>366: ~0.30%</li><li>369: ~0.20%</li><li>370: ~0.10%</li><li>372: ~0.30%</li><li>374: ~0.30%</li><li>375: ~0.10%</li><li>378: ~0.20%</li><li>382: ~0.10%</li><li>383: ~0.10%</li><li>384: ~0.10%</li><li>387: ~0.10%</li><li>388: ~0.10%</li><li>389: ~0.10%</li><li>390: ~0.10%</li><li>391: ~0.10%</li><li>392: ~0.10%</li><li>394: ~0.20%</li><li>395: ~0.60%</li><li>397: ~0.10%</li><li>399: ~0.40%</li><li>400: ~0.30%</li><li>401: ~0.10%</li></ul> | <ul><li>0: ~1.10%</li><li>1: ~0.70%</li><li>2: ~2.10%</li><li>3: ~0.20%</li><li>4: ~0.10%</li><li>5: ~1.00%</li><li>6: ~0.30%</li><li>7: ~0.30%</li><li>8: ~1.40%</li><li>10: ~0.90%</li><li>11: ~1.30%</li><li>12: ~0.20%</li><li>13: ~1.80%</li><li>14: ~1.00%</li><li>15: ~1.50%</li><li>16: ~0.80%</li><li>17: ~1.80%</li><li>18: ~0.20%</li><li>19: ~1.10%</li><li>20: ~1.70%</li><li>21: ~1.90%</li><li>22: ~0.50%</li><li>23: ~1.80%</li><li>24: ~0.40%</li><li>25: ~0.10%</li><li>26: ~0.10%</li><li>27: ~0.10%</li><li>28: ~0.40%</li><li>29: ~0.30%</li><li>30: ~0.60%</li><li>31: ~0.10%</li><li>32: ~0.40%</li><li>33: ~0.70%</li><li>34: ~0.50%</li><li>35: ~0.60%</li><li>36: ~1.10%</li><li>37: ~0.70%</li><li>38: ~0.50%</li><li>39: ~0.40%</li><li>40: ~0.20%</li><li>41: ~3.00%</li><li>42: ~0.10%</li><li>43: ~0.40%</li><li>44: ~0.60%</li><li>45: ~0.50%</li><li>47: ~1.10%</li><li>48: ~0.40%</li><li>49: ~0.20%</li><li>50: ~0.30%</li><li>51: ~0.90%</li><li>52: ~0.20%</li><li>53: ~0.10%</li><li>54: ~0.80%</li><li>55: ~1.10%</li><li>56: ~1.00%</li><li>57: ~0.50%</li><li>58: ~0.60%</li><li>59: ~0.50%</li><li>60: ~0.50%</li><li>61: ~0.30%</li><li>62: ~1.30%</li><li>63: ~1.30%</li><li>64: ~0.40%</li><li>65: ~0.20%</li><li>67: ~1.30%</li><li>68: ~0.60%</li><li>69: ~1.40%</li><li>70: ~0.70%</li><li>71: ~0.50%</li><li>72: ~0.50%</li><li>73: ~0.40%</li><li>74: ~0.70%</li><li>75: ~1.70%</li><li>76: ~0.10%</li><li>77: ~0.10%</li><li>78: ~0.30%</li><li>79: ~0.10%</li><li>80: ~0.30%</li><li>81: ~0.40%</li><li>83: ~0.30%</li><li>84: ~0.40%</li><li>85: ~0.10%</li><li>86: ~0.10%</li><li>87: ~0.60%</li><li>88: ~0.60%</li><li>89: ~0.10%</li><li>90: ~2.40%</li><li>91: ~0.70%</li><li>92: ~0.70%</li><li>93: ~0.50%</li><li>94: ~0.50%</li><li>95: ~0.60%</li><li>96: ~0.10%</li><li>98: ~0.10%</li><li>99: ~0.50%</li><li>100: ~0.30%</li><li>101: ~0.20%</li><li>102: ~0.40%</li><li>103: ~0.40%</li><li>104: ~1.50%</li><li>105: ~0.70%</li><li>106: ~0.20%</li><li>107: ~0.20%</li><li>108: ~0.20%</li><li>109: ~0.30%</li><li>110: ~0.10%</li><li>111: ~0.40%</li><li>113: ~1.60%</li><li>115: ~0.30%</li><li>116: ~0.80%</li><li>117: ~0.30%</li><li>118: ~0.30%</li><li>119: ~0.60%</li><li>120: ~0.20%</li><li>121: ~0.90%</li><li>122: ~0.60%</li><li>123: ~0.50%</li><li>125: ~0.10%</li><li>126: ~0.50%</li><li>127: ~0.30%</li><li>128: ~0.40%</li><li>129: ~0.50%</li><li>130: ~0.20%</li><li>131: ~0.40%</li><li>132: ~0.20%</li><li>133: ~0.10%</li><li>134: ~0.20%</li><li>135: ~0.30%</li><li>136: ~0.10%</li><li>137: ~0.40%</li><li>138: ~0.90%</li><li>139: ~0.30%</li><li>141: ~0.20%</li><li>142: ~0.40%</li><li>143: ~0.30%</li><li>144: ~0.80%</li><li>145: ~0.60%</li><li>147: ~0.30%</li><li>148: ~0.60%</li><li>149: ~0.20%</li><li>151: ~0.20%</li><li>152: ~0.10%</li><li>153: ~0.50%</li><li>154: ~0.20%</li><li>155: ~0.10%</li><li>156: ~1.10%</li><li>158: ~0.10%</li><li>159: ~0.10%</li><li>160: ~0.20%</li><li>162: ~0.10%</li><li>163: ~0.30%</li><li>164: ~0.40%</li><li>165: ~0.10%</li><li>166: ~0.10%</li><li>167: ~0.40%</li><li>168: ~0.10%</li><li>169: ~0.30%</li><li>170: ~0.10%</li><li>171: ~0.10%</li><li>172: ~0.20%</li><li>173: ~0.50%</li><li>174: ~0.10%</li><li>176: ~0.40%</li><li>177: ~0.10%</li><li>180: ~0.20%</li><li>182: ~0.50%</li><li>184: ~0.10%</li><li>185: ~0.20%</li><li>187: ~0.20%</li><li>189: ~0.90%</li><li>191: ~0.10%</li><li>194: ~0.40%</li><li>195: ~0.40%</li><li>196: ~0.10%</li><li>197: ~0.20%</li><li>199: ~0.20%</li><li>202: ~0.60%</li><li>203: ~0.10%</li><li>204: ~0.10%</li><li>205: ~0.10%</li><li>206: ~0.40%</li><li>207: ~0.20%</li><li>208: ~0.10%</li><li>210: ~0.30%</li><li>215: ~0.40%</li><li>216: ~0.20%</li><li>219: ~0.20%</li><li>220: ~0.20%</li><li>222: ~0.50%</li><li>225: ~0.30%</li><li>227: ~0.10%</li><li>229: ~0.20%</li><li>232: ~0.10%</li><li>233: ~0.10%</li><li>236: ~0.20%</li><li>238: ~0.10%</li><li>239: ~0.10%</li><li>241: ~0.10%</li><li>244: ~0.20%</li><li>247: ~0.10%</li><li>248: ~0.10%</li><li>249: ~0.10%</li><li>250: ~0.40%</li><li>253: ~0.10%</li><li>254: ~0.10%</li><li>257: ~0.10%</li><li>260: ~0.10%</li><li>262: ~0.10%</li><li>264: ~0.20%</li><li>267: ~0.20%</li><li>268: ~0.10%</li><li>269: ~0.10%</li></ul> | <ul><li>size: 256 elements</li></ul> | <ul><li>size: 256 elements</li></ul> | <ul><li>size: 6 elements</li></ul> |
* Samples:
| text | theory | level_1_idx | level_2_idx | level_3_idx | direction_idx | individual_id | theory_anchor_id | input_ids | attention_mask | labels |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|:----------------|:----------------|:----------------|:---------------|:-----------------|:-----------------|:-----------------------------------------------------------|:----------------------------------|:---------------------------------------|
| <code>helping jews</code> | <code>mft</code> | <code>3</code> | <code>8</code> | <code>3</code> | <code>0</code> | <code>164</code> | <code>142</code> | <code>[151643, 151643, 151643, 151643, 151643, ...]</code> | <code>[0, 0, 0, 0, 0, ...]</code> | <code>[3, 8, 3, 0, 164, ...]</code> |
| <code>I lose all desire for a girl once I've slept with them.</code> | <code>pvq</code> | <code>10</code> | <code>42</code> | <code>3</code> | <code>0</code> | <code>110</code> | <code>8</code> | <code>[151643, 151643, 151643, 151643, 151643, ...]</code> | <code>[0, 0, 0, 0, 0, ...]</code> | <code>[10, 42, 3, 0, 110, ...]</code> |
| <code>we shouldn't adopt libertarianism because not everyone seems to think about things in the same way, some people have no interest in politics at all</code> | <code>pvq</code> | <code>12</code> | <code>44</code> | <code>40</code> | <code>0</code> | <code>180</code> | <code>171</code> | <code>[151643, 151643, 151643, 151643, 151643, ...]</code> | <code>[0, 0, 0, 0, 0, ...]</code> | <code>[12, 44, 40, 0, 180, ...]</code> |
* Loss: <code>loss.HierarchicalAlignLoss</code>
### Evaluation Dataset
#### csv
* Dataset: csv
* Size: 25,162 evaluation samples
* Columns: <code>text</code>, <code>theory</code>, <code>level_1_idx</code>, <code>level_2_idx</code>, <code>level_3_idx</code>, <code>direction_idx</code>, <code>individual_id</code>, <code>theory_anchor_id</code>, <code>input_ids</code>, <code>attention_mask</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
| | text | theory | level_1_idx | level_2_idx | level_3_idx | direction_idx | individual_id | theory_anchor_id | input_ids | attention_mask | labels |
|:--------|:------|:-------|:------------|:------------|:------------|:--------------|:--------------|:-----------------|:----------|:---------------|:-------|
| type | string | string | int | int | int | int | int | int | list | list | list |
| details | <ul><li>min: 3 tokens</li><li>mean: 14.14 tokens</li><li>max: 136 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 2.88 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>0: ~9.80%</li><li>1: ~0.60%</li><li>2: ~1.70%</li><li>3: ~24.20%</li><li>4: ~5.90%</li><li>5: ~12.20%</li><li>6: ~3.50%</li><li>7: ~2.90%</li><li>8: ~5.10%</li><li>9: ~3.20%</li><li>10: ~4.30%</li><li>11: ~7.30%</li><li>12: ~5.70%</li><li>13: ~5.30%</li><li>14: ~2.20%</li><li>15: ~3.60%</li><li>16: ~0.90%</li><li>17: ~0.50%</li><li>18: ~1.10%</li></ul> | <ul><li>0: ~9.30%</li><li>1: ~0.40%</li><li>2: ~14.70%</li><li>3: ~0.40%</li><li>4: ~0.20%</li><li>5: ~0.50%</li><li>6: ~1.30%</li><li>8: ~5.80%</li><li>9: ~3.30%</li><li>10: ~1.00%</li><li>11: ~1.40%</li><li>12: ~1.10%</li><li>13: ~0.70%</li><li>14: ~5.10%</li><li>15: ~2.50%</li><li>16: ~1.30%</li><li>17: ~11.60%</li><li>18: ~1.60%</li><li>19: ~2.20%</li><li>20: ~0.40%</li><li>21: ~1.10%</li><li>22: ~6.40%</li><li>23: ~1.00%</li><li>24: ~0.10%</li><li>25: ~1.00%</li><li>26: ~1.50%</li><li>27: ~1.00%</li><li>28: ~0.80%</li><li>29: ~0.90%</li><li>30: ~0.80%</li><li>31: ~0.10%</li><li>32: ~0.20%</li><li>33: ~0.20%</li><li>34: ~0.70%</li><li>35: ~2.50%</li><li>36: ~4.20%</li><li>37: ~0.50%</li><li>38: ~1.90%</li><li>39: ~0.30%</li><li>40: ~2.80%</li><li>41: ~0.60%</li><li>42: ~1.30%</li><li>43: ~3.00%</li><li>44: ~2.30%</li></ul> | <ul><li>0: ~2.40%</li><li>1: ~2.90%</li><li>2: ~0.40%</li><li>3: ~72.70%</li><li>4: ~1.40%</li><li>5: ~1.10%</li><li>6: ~0.30%</li><li>7: ~0.10%</li><li>8: ~0.20%</li><li>10: ~0.50%</li><li>11: ~0.10%</li><li>13: ~0.20%</li><li>15: ~0.10%</li><li>16: ~0.60%</li><li>17: ~0.10%</li><li>20: ~0.10%</li><li>21: ~0.10%</li><li>31: ~1.70%</li><li>32: ~0.80%</li><li>33: ~4.10%</li><li>34: ~0.50%</li><li>35: ~1.40%</li><li>36: ~2.00%</li><li>37: ~0.20%</li><li>38: ~1.80%</li><li>39: ~0.10%</li><li>40: ~1.20%</li><li>41: ~1.10%</li><li>42: ~0.60%</li><li>43: ~1.20%</li></ul> | <ul><li>-1: ~23.50%</li><li>0: ~52.20%</li><li>1: ~24.30%</li></ul> | <ul><li>2: ~0.40%</li><li>3: ~0.30%</li><li>5: ~1.30%</li><li>6: ~0.20%</li><li>8: ~0.30%</li><li>9: ~0.10%</li><li>10: ~0.30%</li><li>11: ~1.00%</li><li>12: ~0.20%</li><li>15: ~0.10%</li><li>17: ~0.20%</li><li>18: ~0.50%</li><li>19: ~0.20%</li><li>20: ~0.20%</li><li>21: ~0.30%</li><li>23: ~0.30%</li><li>24: ~0.20%</li><li>26: ~0.30%</li><li>27: ~0.60%</li><li>28: ~0.10%</li><li>29: ~0.10%</li><li>30: ~0.20%</li><li>31: ~0.10%</li><li>32: ~0.30%</li><li>33: ~0.30%</li><li>34: ~0.70%</li><li>35: ~0.10%</li><li>38: ~0.10%</li><li>40: ~0.10%</li><li>41: ~0.20%</li><li>42: ~0.10%</li><li>45: ~0.10%</li><li>47: ~0.30%</li><li>48: ~0.10%</li><li>51: ~0.10%</li><li>52: ~0.10%</li><li>53: ~0.30%</li><li>55: ~0.10%</li><li>62: ~0.40%</li><li>64: ~0.10%</li><li>65: ~0.20%</li><li>68: ~0.20%</li><li>69: ~0.10%</li><li>74: ~0.10%</li><li>77: ~0.10%</li><li>86: ~0.10%</li><li>87: ~0.10%</li><li>89: ~0.10%</li><li>91: ~0.10%</li><li>106: ~0.40%</li><li>107: ~0.40%</li><li>108: ~1.40%</li><li>109: ~1.00%</li><li>110: ~1.30%</li><li>111: ~0.20%</li><li>112: ~1.50%</li><li>113: ~1.80%</li><li>114: ~0.10%</li><li>115: ~0.20%</li><li>116: ~0.70%</li><li>117: ~0.50%</li><li>118: ~2.50%</li><li>119: ~0.90%</li><li>120: ~0.10%</li><li>121: ~2.80%</li><li>122: ~1.00%</li><li>123: ~0.20%</li><li>124: ~0.20%</li><li>126: ~3.60%</li><li>127: ~0.50%</li><li>128: ~0.70%</li><li>129: ~0.60%</li><li>130: ~0.10%</li><li>131: ~1.40%</li><li>132: ~1.10%</li><li>133: ~0.10%</li><li>134: ~0.90%</li><li>135: ~1.40%</li><li>136: ~0.10%</li><li>137: ~1.30%</li><li>138: ~0.90%</li><li>139: ~1.10%</li><li>140: ~0.80%</li><li>141: ~2.20%</li><li>142: ~1.30%</li><li>143: ~1.80%</li><li>144: ~0.20%</li><li>145: ~0.10%</li><li>146: ~1.00%</li><li>147: ~0.30%</li><li>148: ~2.20%</li><li>149: ~2.40%</li><li>150: ~0.40%</li><li>152: ~0.70%</li><li>153: ~0.20%</li><li>154: ~1.10%</li><li>155: ~0.30%</li><li>156: ~0.90%</li><li>157: ~1.20%</li><li>158: ~0.10%</li><li>159: ~0.80%</li><li>160: ~0.20%</li><li>161: ~1.30%</li><li>162: ~1.10%</li><li>163: ~0.10%</li><li>164: ~2.30%</li><li>165: ~0.60%</li><li>166: ~0.10%</li><li>167: ~0.20%</li><li>168: ~0.50%</li><li>170: ~0.10%</li><li>171: ~0.20%</li><li>172: ~0.40%</li><li>173: ~0.40%</li><li>174: ~0.20%</li><li>175: ~0.90%</li><li>176: ~0.20%</li><li>177: ~0.10%</li><li>178: ~0.20%</li><li>179: ~0.50%</li><li>180: ~0.20%</li><li>182: ~0.20%</li><li>183: ~0.10%</li><li>184: ~2.00%</li><li>186: ~0.10%</li><li>187: ~0.50%</li><li>189: ~0.20%</li><li>190: ~0.70%</li><li>191: ~0.20%</li><li>192: ~0.20%</li><li>193: ~0.50%</li><li>195: ~0.20%</li><li>196: ~0.10%</li><li>197: ~0.30%</li><li>199: ~0.60%</li><li>200: ~0.90%</li><li>201: ~0.10%</li><li>202: ~0.40%</li><li>203: ~0.50%</li><li>204: ~0.20%</li><li>205: ~0.20%</li><li>206: ~0.10%</li><li>208: ~0.20%</li><li>210: ~1.10%</li><li>211: ~0.10%</li><li>213: ~0.70%</li><li>214: ~0.10%</li><li>215: ~0.20%</li><li>216: ~1.20%</li><li>217: ~0.40%</li><li>218: ~0.10%</li><li>219: ~0.10%</li><li>223: ~0.10%</li><li>224: ~0.30%</li><li>225: ~0.10%</li><li>227: ~0.30%</li><li>228: ~0.10%</li><li>230: ~0.10%</li><li>233: ~0.80%</li><li>239: ~0.90%</li><li>241: ~0.10%</li><li>242: ~0.10%</li><li>247: ~0.10%</li><li>250: ~0.10%</li><li>257: ~0.10%</li><li>258: ~0.10%</li><li>260: ~0.30%</li><li>261: ~0.10%</li><li>262: ~0.20%</li><li>264: ~1.20%</li><li>266: ~0.30%</li><li>271: ~0.20%</li><li>272: ~0.30%</li><li>274: ~0.20%</li><li>276: ~0.20%</li><li>277: ~0.50%</li><li>278: ~0.30%</li><li>280: ~0.10%</li><li>283: ~0.40%</li><li>286: ~0.20%</li><li>287: ~0.20%</li><li>288: ~0.10%</li><li>290: ~0.10%</li><li>295: ~0.30%</li><li>297: ~0.30%</li><li>300: ~0.10%</li><li>302: ~0.10%</li><li>303: ~0.10%</li><li>306: ~0.10%</li><li>311: ~0.10%</li><li>312: ~0.10%</li><li>314: ~0.10%</li><li>317: ~0.20%</li><li>318: ~0.10%</li><li>319: ~0.20%</li><li>322: ~0.20%</li><li>323: ~0.20%</li><li>324: ~0.20%</li><li>326: ~0.10%</li><li>327: ~0.10%</li><li>329: ~0.20%</li><li>331: ~0.10%</li><li>332: ~0.20%</li><li>334: ~0.30%</li><li>335: ~0.10%</li><li>336: ~0.20%</li><li>337: ~0.20%</li><li>343: ~0.10%</li><li>345: ~0.10%</li><li>350: ~0.30%</li><li>360: ~0.10%</li><li>364: ~0.10%</li><li>366: ~0.20%</li><li>369: ~0.20%</li><li>371: ~0.20%</li><li>373: ~0.10%</li><li>375: ~0.20%</li><li>376: ~0.10%</li><li>377: ~0.40%</li><li>378: ~0.50%</li><li>379: ~0.10%</li><li>380: ~0.10%</li><li>384: ~0.10%</li><li>387: ~0.30%</li><li>390: ~0.20%</li><li>395: ~0.40%</li><li>397: ~0.10%</li><li>399: ~0.40%</li><li>400: ~0.20%</li></ul> | <ul><li>0: ~1.40%</li><li>1: ~0.40%</li><li>2: ~2.00%</li><li>3: ~0.30%</li><li>4: ~0.20%</li><li>5: ~0.80%</li><li>6: ~0.30%</li><li>7: ~0.30%</li><li>8: ~1.50%</li><li>9: ~0.10%</li><li>10: ~0.80%</li><li>11: ~2.00%</li><li>12: ~0.20%</li><li>13: ~1.50%</li><li>14: ~0.60%</li><li>15: ~1.30%</li><li>16: ~0.50%</li><li>17: ~2.50%</li><li>18: ~0.10%</li><li>19: ~1.50%</li><li>20: ~2.40%</li><li>21: ~1.20%</li><li>22: ~0.40%</li><li>23: ~2.30%</li><li>24: ~0.70%</li><li>25: ~0.20%</li><li>27: ~0.20%</li><li>28: ~0.60%</li><li>29: ~0.40%</li><li>30: ~1.50%</li><li>31: ~0.20%</li><li>32: ~0.50%</li><li>33: ~1.30%</li><li>34: ~0.80%</li><li>35: ~0.80%</li><li>36: ~0.40%</li><li>37: ~0.90%</li><li>38: ~1.10%</li><li>39: ~0.40%</li><li>40: ~0.20%</li><li>41: ~2.60%</li><li>42: ~0.70%</li><li>43: ~0.70%</li><li>44: ~0.60%</li><li>45: ~0.70%</li><li>47: ~0.50%</li><li>48: ~0.50%</li><li>50: ~0.30%</li><li>51: ~0.40%</li><li>52: ~0.10%</li><li>54: ~0.60%</li><li>55: ~0.50%</li><li>56: ~1.00%</li><li>58: ~0.20%</li><li>59: ~0.10%</li><li>60: ~0.40%</li><li>61: ~0.50%</li><li>62: ~1.30%</li><li>63: ~1.10%</li><li>64: ~0.40%</li><li>65: ~0.60%</li><li>66: ~0.10%</li><li>67: ~1.60%</li><li>68: ~0.60%</li><li>69: ~0.70%</li><li>70: ~0.20%</li><li>71: ~0.50%</li><li>72: ~0.20%</li><li>73: ~0.50%</li><li>74: ~0.60%</li><li>75: ~1.20%</li><li>76: ~0.30%</li><li>77: ~0.20%</li><li>78: ~0.50%</li><li>80: ~0.40%</li><li>81: ~0.70%</li><li>83: ~0.20%</li><li>87: ~0.40%</li><li>88: ~0.60%</li><li>89: ~0.30%</li><li>90: ~2.50%</li><li>91: ~0.20%</li><li>92: ~0.70%</li><li>93: ~1.10%</li><li>94: ~0.60%</li><li>95: ~0.50%</li><li>99: ~0.60%</li><li>100: ~0.10%</li><li>102: ~0.40%</li><li>103: ~0.50%</li><li>104: ~1.20%</li><li>105: ~0.10%</li><li>106: ~0.40%</li><li>107: ~0.30%</li><li>108: ~0.10%</li><li>109: ~0.20%</li><li>110: ~0.10%</li><li>111: ~0.20%</li><li>112: ~0.10%</li><li>113: ~1.20%</li><li>114: ~0.10%</li><li>115: ~0.10%</li><li>116: ~0.80%</li><li>117: ~0.40%</li><li>118: ~0.40%</li><li>119: ~1.30%</li><li>120: ~0.10%</li><li>121: ~1.00%</li><li>122: ~0.40%</li><li>123: ~0.10%</li><li>124: ~0.10%</li><li>125: ~0.10%</li><li>126: ~0.40%</li><li>127: ~0.10%</li><li>128: ~0.10%</li><li>129: ~0.80%</li><li>130: ~0.20%</li><li>131: ~0.40%</li><li>132: ~0.20%</li><li>133: ~0.10%</li><li>134: ~0.30%</li><li>135: ~0.30%</li><li>136: ~0.70%</li><li>137: ~0.20%</li><li>138: ~0.90%</li><li>139: ~0.10%</li><li>140: ~0.10%</li><li>141: ~0.50%</li><li>142: ~0.20%</li><li>143: ~0.50%</li><li>144: ~0.50%</li><li>145: ~0.90%</li><li>147: ~0.20%</li><li>148: ~0.50%</li><li>149: ~0.40%</li><li>150: ~0.30%</li><li>151: ~0.20%</li><li>153: ~0.10%</li><li>154: ~0.20%</li><li>155: ~0.20%</li><li>156: ~0.40%</li><li>158: ~0.10%</li><li>159: ~0.10%</li><li>160: ~0.40%</li><li>161: ~0.10%</li><li>162: ~0.10%</li><li>163: ~0.30%</li><li>164: ~0.20%</li><li>165: ~0.10%</li><li>166: ~0.10%</li><li>167: ~0.10%</li><li>168: ~0.20%</li><li>171: ~0.10%</li><li>172: ~0.30%</li><li>173: ~0.30%</li><li>174: ~0.30%</li><li>175: ~0.10%</li><li>176: ~0.30%</li><li>177: ~0.20%</li><li>179: ~0.20%</li><li>182: ~0.40%</li><li>183: ~0.30%</li><li>184: ~0.10%</li><li>185: ~0.60%</li><li>186: ~0.10%</li><li>187: ~0.10%</li><li>188: ~0.20%</li><li>189: ~0.90%</li><li>191: ~0.60%</li><li>192: ~0.10%</li><li>193: ~0.30%</li><li>194: ~0.10%</li><li>195: ~0.30%</li><li>197: ~0.10%</li><li>199: ~0.30%</li><li>200: ~0.40%</li><li>202: ~0.90%</li><li>203: ~0.20%</li><li>204: ~0.10%</li><li>206: ~0.20%</li><li>207: ~0.20%</li><li>209: ~0.10%</li><li>210: ~0.40%</li><li>212: ~0.20%</li><li>213: ~0.10%</li><li>214: ~0.10%</li><li>215: ~0.20%</li><li>216: ~0.20%</li><li>218: ~0.10%</li><li>219: ~0.10%</li><li>220: ~0.20%</li><li>222: ~0.10%</li><li>223: ~0.10%</li><li>225: ~0.10%</li><li>227: ~0.10%</li><li>232: ~0.10%</li><li>233: ~0.10%</li><li>234: ~0.20%</li><li>235: ~0.10%</li><li>238: ~0.10%</li><li>240: ~0.10%</li><li>241: ~0.30%</li><li>242: ~0.10%</li><li>243: ~0.20%</li><li>245: ~0.20%</li><li>246: ~0.30%</li><li>248: ~0.10%</li><li>253: ~0.10%</li><li>254: ~0.30%</li><li>258: ~0.20%</li><li>264: ~0.60%</li><li>265: ~0.10%</li><li>268: ~0.10%</li><li>271: ~0.10%</li></ul> | <ul><li>size: 256 elements</li></ul> | <ul><li>size: 256 elements</li></ul> | <ul><li>size: 6 elements</li></ul> |
* Samples:
| text | theory | level_1_idx | level_2_idx | level_3_idx | direction_idx | individual_id | theory_anchor_id | input_ids | attention_mask | labels |
|:-------------------------------------------------------------------------------------------------------|:-------------------|:----------------|:----------------|:----------------|:---------------|:-----------------|:-----------------|:-----------------------------------------------------------|:----------------------------------|:---------------------------------------|
| <code>Sniffing the air in front of a coworker who asked you to</code> | <code>right</code> | <code>0</code> | <code>0</code> | <code>1</code> | <code>1</code> | <code>29</code> | <code>47</code> | <code>[151643, 151643, 151643, 151643, 151643, ...]</code> | <code>[0, 0, 0, 0, 0, ...]</code> | <code>[0, 0, 1, 1, 29, ...]</code> |
| <code>I give money to Black people</code> | <code>pvq</code> | <code>11</code> | <code>36</code> | <code>33</code> | <code>0</code> | <code>164</code> | <code>17</code> | <code>[151643, 151643, 151643, 151643, 151643, ...]</code> | <code>[0, 0, 0, 0, 0, ...]</code> | <code>[11, 36, 33, 0, 164, ...]</code> |
| <code>Shooting a courier in the head who was responsible for the destruction of an entire city.</code> | <code>right</code> | <code>0</code> | <code>0</code> | <code>0</code> | <code>1</code> | <code>11</code> | <code>42</code> | <code>[151643, 151643, 151643, 151643, 151643, ...]</code> | <code>[0, 0, 0, 0, 0, ...]</code> | <code>[0, 0, 0, 1, 11, ...]</code> |
* Loss: <code>loss.HierarchicalAlignLoss</code>
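
The card names <code>loss.HierarchicalAlignLoss</code> but does not include its source, so only the loss name and the label layout above are known. As a rough illustration only, here is a minimal sketch of how a hierarchical objective over the <code>level_1_idx</code>, <code>level_2_idx</code>, and <code>level_3_idx</code> columns *might* be structured; the class name, the one-head-per-level design, and the unweighted sum are all assumptions, not the card's actual implementation.

```python
import torch
import torch.nn as nn

class HierarchicalLossSketch(nn.Module):
    """Hypothetical reconstruction: one classification head per taxonomy
    level, with the per-level cross-entropy losses summed. This is NOT the
    card's actual loss.HierarchicalAlignLoss."""

    def __init__(self, hidden_dim: int, level_sizes: tuple[int, int, int]):
        super().__init__()
        # One linear head per hierarchy level (level_1, level_2, level_3).
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, n) for n in level_sizes)
        self.ce = nn.CrossEntropyLoss()

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # labels[:, 0:3] hold level_1_idx, level_2_idx, level_3_idx, matching
        # the leading entries of the `labels` column in the samples above.
        return sum(self.ce(head(embeddings), labels[:, i])
                   for i, head in enumerate(self.heads))
```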
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 64
- `learning_rate`: 1e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_num_workers`: 2
- `remove_unused_columns`: False
- `dataloader_pin_memory`: False
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 2
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: False
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: False
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
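
For readers reproducing this setup, the non-default values above map onto `SentenceTransformerTrainingArguments` roughly as follows. This is a sketch: the output directory is a placeholder, and any argument not listed in this card keeps its library default.

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Non-default hyperparameters from this card; `output_dir` is a placeholder.
args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    eval_strategy="steps",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=64,
    learning_rate=1e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    dataloader_num_workers=2,
    remove_unused_columns=False,
    dataloader_pin_memory=False,
)
```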
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | hierarchical_eval_rank_acc_pairwise |
|:-----:|:-----:|:-------------:|:---------------:|:-----------------------------------:|
| 0.002 | 100 | 8.863 | - | - |
| 0.004 | 200 | 8.6126 | - | - |
| 0.006 | 300 | 8.5867 | - | - |
| 0.008 | 400 | 8.5734 | - | - |
| 0.01 | 500 | 8.6543 | - | - |
| 0.012 | 600 | 8.2878 | - | - |
| 0.014 | 700 | 8.6143 | - | - |
| 0.016 | 800 | 8.2888 | - | - |
| 0.018 | 900 | 8.5704 | - | - |
| 0.02 | 1000 | 8.3699 | - | - |
| 0.022 | 1100 | 8.2221 | - | - |
| 0.024 | 1200 | 8.2667 | - | - |
| 0.026 | 1300 | 8.2177 | - | - |
| 0.028 | 1400 | 8.456 | - | - |
| 0.03 | 1500 | 8.1321 | - | - |
| 0.032 | 1600 | 8.0692 | - | - |
| 0.034 | 1700 | 7.9953 | - | - |
| 0.036 | 1800 | 7.9581 | - | - |
| 0.038 | 1900 | 7.7263 | - | - |
| 0.04 | 2000 | 7.7563 | - | - |
| 0.042 | 2100 | 7.8739 | - | - |
| 0.044 | 2200 | 7.4963 | - | - |
| 0.046 | 2300 | 7.8465 | - | - |
| 0.048 | 2400 | 7.4027 | - | - |
| 0.05 | 2500 | 7.5958 | - | - |
| 0.052 | 2600 | 7.3302 | - | - |
| 0.054 | 2700 | 7.3382 | - | - |
| 0.056 | 2800 | 7.4264 | - | - |
| 0.058 | 2900 | 7.3971 | - | - |
| 0.06 | 3000 | 7.4349 | - | - |
| 0.062 | 3100 | 7.3066 | - | - |
| 0.064 | 3200 | 7.4108 | - | - |
| 0.066 | 3300 | 7.2841 | - | - |
| 0.068 | 3400 | 7.3568 | - | - |
| 0.07 | 3500 | 7.2785 | - | - |
| 0.072 | 3600 | 7.2588 | - | - |
| 0.074 | 3700 | 7.3061 | - | - |
| 0.076 | 3800 | 7.2662 | - | - |
| 0.078 | 3900 | 7.0706 | - | - |
| 0.08 | 4000 | 6.9688 | - | - |
| 0.082 | 4100 | 7.185 | - | - |
| 0.084 | 4200 | 7.0986 | - | - |
| 0.086 | 4300 | 7.1718 | - | - |
| 0.088 | 4400 | 7.1482 | - | - |
| 0.09 | 4500 | 7.059 | - | - |
| 0.092 | 4600 | 7.0346 | - | - |
| 0.094 | 4700 | 7.0206 | - | - |
| 0.096 | 4800 | 7.0981 | - | - |
| 0.098 | 4900 | 6.9171 | - | - |
| 0.1 | 5000 | 7.2706 | 6.7602 | 0.6264 |
| 0.102 | 5100 | 7.0214 | - | - |
| 0.104 | 5200 | 7.058 | - | - |
| 0.106 | 5300 | 7.1402 | - | - |
| 0.108 | 5400 | 6.9691 | - | - |
| 0.11 | 5500 | 6.99 | - | - |
| 0.112 | 5600 | 6.9152 | - | - |
| 0.114 | 5700 | 6.9227 | - | - |
| 0.116 | 5800 | 6.95 | - | - |
| 0.118 | 5900 | 6.9332 | - | - |
| 0.12 | 6000 | 7.0277 | - | - |
| 0.122 | 6100 | 6.878 | - | - |
| 0.124 | 6200 | 6.9544 | - | - |
| 0.126 | 6300 | 7.1075 | - | - |
| 0.128 | 6400 | 7.0534 | - | - |
| 0.13 | 6500 | 6.9552 | - | - |
| 0.132 | 6600 | 6.9237 | - | - |
| 0.134 | 6700 | 6.8461 | - | - |
| 0.136 | 6800 | 6.8198 | - | - |
| 0.138 | 6900 | 6.8158 | - | - |
| 0.14 | 7000 | 6.8623 | - | - |
| 0.142 | 7100 | 6.8271 | - | - |
| 0.144 | 7200 | 6.9277 | - | - |
| 0.146 | 7300 | 6.887 | - | - |
| 0.148 | 7400 | 6.7975 | - | - |
| 0.15 | 7500 | 6.9359 | - | - |
| 0.152 | 7600 | 6.9384 | - | - |
| 0.154 | 7700 | 6.74 | - | - |
| 0.156 | 7800 | 6.9334 | - | - |
| 0.158 | 7900 | 6.7708 | - | - |
| 0.16 | 8000 | 6.8922 | - | - |
| 0.162 | 8100 | 6.8369 | - | - |
| 0.164 | 8200 | 6.9195 | - | - |
| 0.166 | 8300 | 6.8541 | - | - |
| 0.168 | 8400 | 6.8912 | - | - |
| 0.17 | 8500 | 6.7739 | - | - |
| 0.172 | 8600 | 6.7716 | - | - |
| 0.174 | 8700 | 6.8187 | - | - |
| 0.176 | 8800 | 6.8243 | - | - |
| 0.178 | 8900 | 6.8658 | - | - |
| 0.18 | 9000 | 6.7436 | - | - |
| 0.182 | 9100 | 6.848 | - | - |
| 0.184 | 9200 | 6.8099 | - | - |
| 0.186 | 9300 | 6.7362 | - | - |
| 0.188 | 9400 | 6.5893 | - | - |
| 0.19 | 9500 | 6.9125 | - | - |
| 0.192 | 9600 | 6.7482 | - | - |
| 0.194 | 9700 | 6.6647 | - | - |
| 0.196 | 9800 | 6.7938 | - | - |
| 0.198 | 9900 | 6.9375 | - | - |
| 0.2 | 10000 | 6.7367 | 6.5000 | 0.6298 |
| 0.202 | 10100 | 6.8146 | - | - |
| 0.204 | 10200 | 6.6685 | - | - |
| 0.206 | 10300 | 6.7865 | - | - |
| 0.208 | 10400 | 6.8209 | - | - |
| 0.21 | 10500 | 6.6641 | - | - |
| 0.212 | 10600 | 6.687 | - | - |
| 0.214 | 10700 | 6.5619 | - | - |
| 0.216 | 10800 | 6.7886 | - | - |
| 0.218 | 10900 | 6.8753 | - | - |
| 0.22 | 11000 | 6.7975 | - | - |
| 0.222 | 11100 | 6.8483 | - | - |
| 0.224 | 11200 | 6.6772 | - | - |
| 0.226 | 11300 | 6.7118 | - | - |
| 0.228 | 11400 | 6.8232 | - | - |
| 0.23 | 11500 | 6.7249 | - | - |
| 0.232 | 11600 | 6.7355 | - | - |
| 0.234 | 11700 | 6.7907 | - | - |
| 0.236 | 11800 | 6.8329 | - | - |
| 0.238 | 11900 | 6.7261 | - | - |
| 0.24 | 12000 | 6.7723 | - | - |
| 0.242 | 12100 | 6.7792 | - | - |
| 0.244 | 12200 | 6.7118 | - | - |
| 0.246 | 12300 | 6.7492 | - | - |
| 0.248 | 12400 | 6.7376 | - | - |
| 0.25 | 12500 | 6.7757 | - | - |
| 0.252 | 12600 | 6.5951 | - | - |
| 0.254 | 12700 | 6.7304 | - | - |
| 0.256 | 12800 | 6.6611 | - | - |
| 0.258 | 12900 | 6.8147 | - | - |
| 0.26 | 13000 | 6.8069 | - | - |
| 0.262 | 13100 | 6.6456 | - | - |
| 0.264 | 13200 | 6.7627 | - | - |
| 0.266 | 13300 | 6.6124 | - | - |
| 0.268 | 13400 | 6.6702 | - | - |
| 0.27 | 13500 | 6.6929 | - | - |
| 0.272 | 13600 | 6.7382 | - | - |
| 0.274 | 13700 | 6.6327 | - | - |
| 0.276 | 13800 | 6.6304 | - | - |
| 0.278 | 13900 | 6.7179 | - | - |
| 0.28 | 14000 | 6.7677 | - | - |
| 0.282 | 14100 | 6.8092 | - | - |
| 0.284 | 14200 | 6.7293 | - | - |
| 0.286 | 14300 | 6.6518 | - | - |
| 0.288 | 14400 | 6.7304 | - | - |
| 0.29 | 14500 | 6.7372 | - | - |
| 0.292 | 14600 | 6.7228 | - | - |
| 0.294 | 14700 | 6.6805 | - | - |
| 0.296 | 14800 | 6.6549 | - | - |
| 0.298 | 14900 | 6.6352 | - | - |
| 0.3 | 15000 | 6.5993 | 6.4467 | 0.6310 |
| 0.302 | 15100 | 6.6171 | - | - |
| 0.304 | 15200 | 6.8189 | - | - |
| 0.306 | 15300 | 6.6347 | - | - |
| 0.308 | 15400 | 6.6399 | - | - |
| 0.31 | 15500 | 6.7399 | - | - |
| 0.312 | 15600 | 6.7051 | - | - |
| 0.314 | 15700 | 6.5462 | - | - |
| 0.316 | 15800 | 6.745 | - | - |
| 0.318 | 15900 | 6.5678 | - | - |
| 0.32 | 16000 | 6.7798 | - | - |
| 0.322 | 16100 | 6.6611 | - | - |
| 0.324 | 16200 | 6.6356 | - | - |
| 0.326 | 16300 | 6.4971 | - | - |
| 0.328 | 16400 | 6.7119 | - | - |
| 0.33 | 16500 | 6.5779 | - | - |
| 0.332 | 16600 | 6.6275 | - | - |
| 0.334 | 16700 | 6.7606 | - | - |
| 0.336 | 16800 | 6.8217 | - | - |
| 0.338 | 16900 | 6.753 | - | - |
| 0.34 | 17000 | 6.5565 | - | - |
| 0.342 | 17100 | 6.5239 | - | - |
| 0.344 | 17200 | 6.7738 | - | - |
| 0.346 | 17300 | 6.7001 | - | - |
| 0.348 | 17400 | 6.7601 | - | - |
| 0.35 | 17500 | 6.68 | - | - |
| 0.352 | 17600 | 6.7299 | - | - |
| 0.354 | 17700 | 6.5685 | - | - |
| 0.356 | 17800 | 6.6073 | - | - |
| 0.358 | 17900 | 6.6223 | - | - |
| 0.36 | 18000 | 6.587 | - | - |
| 0.362 | 18100 | 6.6053 | - | - |
| 0.364 | 18200 | 6.607 | - | - |
| 0.366 | 18300 | 6.7778 | - | - |
| 0.368 | 18400 | 6.5753 | - | - |
| 0.37 | 18500 | 6.6268 | - | - |
| 0.372 | 18600 | 6.3504 | - | - |
| 0.374 | 18700 | 6.5951 | - | - |
| 0.376 | 18800 | 6.5466 | - | - |
| 0.378 | 18900 | 6.4757 | - | - |
| 0.38 | 19000 | 6.5852 | - | - |
| 0.382 | 19100 | 6.6563 | - | - |
| 0.384 | 19200 | 6.6339 | - | - |
| 0.386 | 19300 | 6.5264 | - | - |
| 0.388 | 19400 | 6.6108 | - | - |
| 0.39 | 19500 | 6.6266 | - | - |
| 0.392 | 19600 | 6.5269 | - | - |
| 0.394 | 19700 | 6.6871 | - | - |
| 0.396 | 19800 | 6.631 | - | - |
| 0.398 | 19900 | 6.5461 | - | - |
| 0.4 | 20000 | 6.5363 | 6.4107 | 0.6288 |
| 0.402 | 20100 | 6.6629 | - | - |
| 0.404 | 20200 | 6.7509 | - | - |
| 0.406 | 20300 | 6.5522 | - | - |
| 0.408 | 20400 | 6.6984 | - | - |
| 0.41 | 20500 | 6.6129 | - | - |
| 0.412 | 20600 | 6.7844 | - | - |
| 0.414 | 20700 | 6.6763 | - | - |
| 0.416 | 20800 | 6.5173 | - | - |
| 0.418 | 20900 | 6.8498 | - | - |
| 0.42 | 21000 | 6.5229 | - | - |
| 0.422 | 21100 | 6.5078 | - | - |
| 0.424 | 21200 | 6.6122 | - | - |
| 0.426 | 21300 | 6.6502 | - | - |
| 0.428 | 21400 | 6.5743 | - | - |
| 0.43 | 21500 | 6.6089 | - | - |
| 0.432 | 21600 | 6.5504 | - | - |
| 0.434 | 21700 | 6.4792 | - | - |
| 0.436 | 21800 | 6.6428 | - | - |
| 0.438 | 21900 | 6.6686 | - | - |
| 0.44 | 22000 | 6.6688 | - | - |
| 0.442 | 22100 | 6.4967 | - | - |
| 0.444 | 22200 | 6.6612 | - | - |
| 0.446 | 22300 | 6.6265 | - | - |
| 0.448 | 22400 | 6.4918 | - | - |
| 0.45 | 22500 | 6.4837 | - | - |
| 0.452 | 22600 | 6.5398 | - | - |
| 0.454 | 22700 | 6.6003 | - | - |
| 0.456 | 22800 | 6.6726 | - | - |
| 0.458 | 22900 | 6.5434 | - | - |
| 0.46 | 23000 | 6.5614 | - | - |
| 0.462 | 23100 | 6.6048 | - | - |
| 0.464 | 23200 | 6.5621 | - | - |
| 0.466 | 23300 | 6.7241 | - | - |
| 0.468 | 23400 | 6.5397 | - | - |
| 0.47 | 23500 | 6.553 | - | - |
| 0.472 | 23600 | 6.6923 | - | - |
| 0.474 | 23700 | 6.5802 | - | - |
| 0.476 | 23800 | 6.5856 | - | - |
| 0.478 | 23900 | 6.6833 | - | - |
| 0.48 | 24000 | 6.472 | - | - |
| 0.482 | 24100 | 6.5881 | - | - |
| 0.484 | 24200 | 6.4751 | - | - |
| 0.486 | 24300 | 6.5683 | - | - |
| 0.488 | 24400 | 6.5503 | - | - |
| 0.49 | 24500 | 6.5425 | - | - |
| 0.492 | 24600 | 6.5624 | - | - |
| 0.494 | 24700 | 6.5535 | - | - |
| 0.496 | 24800 | 6.5609 | - | - |
| 0.498 | 24900 | 6.5529 | - | - |
| 0.5 | 25000 | 6.5274 | 6.3923 | 0.6331 |
| 0.502 | 25100 | 6.6994 | - | - |
| 0.504 | 25200 | 6.5743 | - | - |
| 0.506 | 25300 | 6.7212 | - | - |
| 0.508 | 25400 | 6.574 | - | - |
| 0.51 | 25500 | 6.5403 | - | - |
| 0.512 | 25600 | 6.5913 | - | - |
| 0.514 | 25700 | 6.3412 | - | - |
| 0.516 | 25800 | 6.5094 | - | - |
| 0.518 | 25900 | 6.3989 | - | - |
| 0.52 | 26000 | 6.5793 | - | - |
| 0.522 | 26100 | 6.674 | - | - |
| 0.524 | 26200 | 6.5732 | - | - |
| 0.526 | 26300 | 6.4811 | - | - |
| 0.528 | 26400 | 6.6571 | - | - |
| 0.53 | 26500 | 6.5902 | - | - |
| 0.532 | 26600 | 6.5609 | - | - |
| 0.534 | 26700 | 6.3951 | - | - |
| 0.536 | 26800 | 6.4939 | - | - |
| 0.538 | 26900 | 6.5056 | - | - |
| 0.54 | 27000 | 6.5732 | - | - |
| 0.542 | 27100 | 6.5108 | - | - |
| 0.544 | 27200 | 6.6055 | - | - |
| 0.546 | 27300 | 6.5638 | - | - |
| 0.548 | 27400 | 6.6444 | - | - |
| 0.55 | 27500 | 6.5176 | - | - |
| 0.552 | 27600 | 6.5988 | - | - |
| 0.554 | 27700 | 6.5704 | - | - |
| 0.556 | 27800 | 6.5665 | - | - |
| 0.558 | 27900 | 6.4506 | - | - |
| 0.56 | 28000 | 6.6166 | - | - |
| 0.562 | 28100 | 6.4124 | - | - |
| 0.564 | 28200 | 6.5175 | - | - |
| 0.566 | 28300 | 6.6086 | - | - |
| 0.568 | 28400 | 6.501 | - | - |
| 0.57 | 28500 | 6.3183 | - | - |
| 0.572 | 28600 | 6.5242 | - | - |
| 0.574 | 28700 | 6.5555 | - | - |
| 0.576 | 28800 | 6.4403 | - | - |
| 0.578 | 28900 | 6.5463 | - | - |
| 0.58 | 29000 | 6.4797 | - | - |
| 0.582 | 29100 | 6.6395 | - | - |
| 0.584 | 29200 | 6.5286 | - | - |
| 0.586 | 29300 | 6.3537 | - | - |
| 0.588 | 29400 | 6.5295 | - | - |
| 0.59 | 29500 | 6.5097 | - | - |
| 0.592 | 29600 | 6.5493 | - | - |
| 0.594 | 29700 | 6.3386 | - | - |
| 0.596 | 29800 | 6.5333 | - | - |
| 0.598 | 29900 | 6.52 | - | - |
| 0.6 | 30000 | 6.5887 | 6.3761 | 0.6270 |
| 0.602 | 30100 | 6.4338 | - | - |
| 0.604 | 30200 | 6.4478 | - | - |
| 0.606 | 30300 | 6.4967 | - | - |
| 0.608 | 30400 | 6.4469 | - | - |
| 0.61 | 30500 | 6.5157 | - | - |
| 0.612 | 30600 | 6.478 | - | - |
| 0.614 | 30700 | 6.4253 | - | - |
| 0.616 | 30800 | 6.3981 | - | - |
| 0.618 | 30900 | 6.5922 | - | - |
| 0.62 | 31000 | 6.5958 | - | - |
| 0.622 | 31100 | 6.4366 | - | - |
| 0.624 | 31200 | 6.5846 | - | - |
| 0.626 | 31300 | 6.5773 | - | - |
| 0.628 | 31400 | 6.3671 | - | - |
| 0.63 | 31500 | 6.4352 | - | - |
| 0.632 | 31600 | 6.5461 | - | - |
| 0.634 | 31700 | 6.4142 | - | - |
| 0.636 | 31800 | 6.5645 | - | - |
| 0.638 | 31900 | 6.5351 | - | - |
| 0.64 | 32000 | 6.5156 | - | - |
| 0.642 | 32100 | 6.3776 | - | - |
| 0.644 | 32200 | 6.6332 | - | - |
| 0.646 | 32300 | 6.5322 | - | - |
| 0.648 | 32400 | 6.5335 | - | - |
| 0.65 | 32500 | 6.4275 | - | - |
| 0.652 | 32600 | 6.459 | - | - |
| 0.654 | 32700 | 6.2681 | - | - |
| 0.656 | 32800 | 6.4535 | - | - |
| 0.658 | 32900 | 6.5129 | - | - |
| 0.66 | 33000 | 6.6725 | - | - |
| 0.662 | 33100 | 6.4689 | - | - |
| 0.664 | 33200 | 6.6807 | - | - |
| 0.666 | 33300 | 6.5228 | - | - |
| 0.668 | 33400 | 6.4118 | - | - |
| 0.67 | 33500 | 6.4641 | - | - |
| 0.672 | 33600 | 6.4739 | - | - |
| 0.674 | 33700 | 6.4174 | - | - |
| 0.676 | 33800 | 6.4983 | - | - |
| 0.678 | 33900 | 6.5198 | - | - |
| 0.68 | 34000 | 6.3485 | - | - |
| 0.682 | 34100 | 6.374 | - | - |
| 0.684 | 34200 | 6.6084 | - | - |
| 0.686 | 34300 | 6.6016 | - | - |
| 0.688 | 34400 | 6.4756 | - | - |
| 0.69 | 34500 | 6.5405 | - | - |
| 0.692 | 34600 | 6.4506 | - | - |
| 0.694 | 34700 | 6.4908 | - | - |
| 0.696 | 34800 | 6.5218 | - | - |
| 0.698 | 34900 | 6.6328 | - | - |
| 0.7 | 35000 | 6.4756 | 6.3589 | 0.6258 |
| 0.702 | 35100 | 6.542 | - | - |
| 0.704 | 35200 | 6.5937 | - | - |
| 0.706 | 35300 | 6.4922 | - | - |
| 0.708 | 35400 | 6.5297 | - | - |
| 0.71 | 35500 | 6.5752 | - | - |
| 0.712 | 35600 | 6.5213 | - | - |
| 0.714 | 35700 | 6.5536 | - | - |
| 0.716 | 35800 | 6.4497 | - | - |
| 0.718 | 35900 | 6.4656 | - | - |
| 0.72 | 36000 | 6.5773 | - | - |
| 0.722 | 36100 | 6.6454 | - | - |
| 0.724 | 36200 | 6.5226 | - | - |
| 0.726 | 36300 | 6.479 | - | - |
| 0.728 | 36400 | 6.5378 | - | - |
| 0.73 | 36500 | 6.2685 | - | - |
| 0.732 | 36600 | 6.4763 | - | - |
| 0.734 | 36700 | 6.4529 | - | - |
| 0.736 | 36800 | 6.5109 | - | - |
| 0.738 | 36900 | 6.4567 | - | - |
| 0.74 | 37000 | 6.5361 | - | - |
| 0.742 | 37100 | 6.4302 | - | - |
| 0.744 | 37200 | 6.4081 | - | - |
| 0.746 | 37300 | 6.4175 | - | - |
| 0.748 | 37400 | 6.5146 | - | - |
| 0.75 | 37500 | 6.512 | - | - |
| 0.752 | 37600 | 6.5145 | - | - |
| 0.754 | 37700 | 6.4044 | - | - |
| 0.756 | 37800 | 6.3157 | - | - |
| 0.758 | 37900 | 6.3216 | - | - |
| 0.76 | 38000 | 6.5647 | - | - |
| 0.762 | 38100 | 6.398 | - | - |
| 0.764 | 38200 | 6.4477 | - | - |
| 0.766 | 38300 | 6.5714 | - | - |
| 0.768 | 38400 | 6.3811 | - | - |
| 0.77 | 38500 | 6.5596 | - | - |
| 0.772 | 38600 | 6.5495 | - | - |
| 0.774 | 38700 | 6.3787 | - | - |
| 0.776 | 38800 | 6.4358 | - | - |
| 0.778 | 38900 | 6.6159 | - | - |
| 0.78 | 39000 | 6.412 | - | - |
| 0.782 | 39100 | 6.4989 | - | - |
| 0.784 | 39200 | 6.628 | - | - |
| 0.786 | 39300 | 6.3575 | - | - |
| 0.788 | 39400 | 6.4347 | - | - |
| 0.79 | 39500 | 6.4431 | - | - |
| 0.792 | 39600 | 6.6223 | - | - |
| 0.794 | 39700 | 6.4641 | - | - |
| 0.796 | 39800 | 6.564 | - | - |
| 0.798 | 39900 | 6.5485 | - | - |
| 0.8 | 40000 | 6.3965 | 6.3569 | 0.6335 |
| 0.802 | 40100 | 6.5504 | - | - |
| 0.804 | 40200 | 6.5624 | - | - |
| 0.806 | 40300 | 6.4627 | - | - |
| 0.808 | 40400 | 6.4939 | - | - |
| 0.81 | 40500 | 6.5739 | - | - |
| 0.812 | 40600 | 6.3659 | - | - |
| 0.814 | 40700 | 6.4614 | - | - |
| 0.816 | 40800 | 6.5228 | - | - |
| 0.818 | 40900 | 6.4587 | - | - |
| 0.82 | 41000 | 6.4977 | - | - |
| 0.822 | 41100 | 6.5719 | - | - |
| 0.824 | 41200 | 6.5082 | - | - |
| 0.826 | 41300 | 6.2933 | - | - |
| 0.828 | 41400 | 6.4446 | - | - |
| 0.83 | 41500 | 6.5952 | - | - |
| 0.832 | 41600 | 6.5081 | - | - |
| 0.834 | 41700 | 6.4602 | - | - |
| 0.836 | 41800 | 6.4017 | - | - |
| 0.838 | 41900 | 6.3752 | - | - |
| 0.84 | 42000 | 6.5943 | - | - |
| 0.842 | 42100 | 6.579 | - | - |
| 0.844 | 42200 | 6.4426 | - | - |
| 0.846 | 42300 | 6.3939 | - | - |
| 0.848 | 42400 | 6.7033 | - | - |
| 0.85 | 42500 | 6.3032 | - | - |
| 0.852 | 42600 | 6.4761 | - | - |
| 0.854 | 42700 | 6.5372 | - | - |
| 0.856 | 42800 | 6.3853 | - | - |
| 0.858 | 42900 | 6.6019 | - | - |
| 0.86 | 43000 | 6.6717 | - | - |
| 0.862 | 43100 | 6.4925 | - | - |
| 0.864 | 43200 | 6.3905 | - | - |
| 0.866 | 43300 | 6.3452 | - | - |
| 0.868 | 43400 | 6.4343 | - | - |
| 0.87 | 43500 | 6.5515 | - | - |
| 0.872 | 43600 | 6.493 | - | - |
| 0.874 | 43700 | 6.4786 | - | - |
| 0.876 | 43800 | 6.5409 | - | - |
| 0.878 | 43900 | 6.4832 | - | - |
| 0.88 | 44000 | 6.5489 | - | - |
| 0.882 | 44100 | 6.4214 | - | - |
| 0.884 | 44200 | 6.4906 | - | - |
| 0.886 | 44300 | 6.3674 | - | - |
| 0.888 | 44400 | 6.5963 | - | - |
| 0.89 | 44500 | 6.4818 | - | - |
| 0.892 | 44600 | 6.4711 | - | - |
| 0.894 | 44700 | 6.653 | - | - |
| 0.896 | 44800 | 6.4324 | - | - |
| 0.898 | 44900 | 6.3583 | - | - |
| 0.9 | 45000 | 6.4626 | 6.3545 | 0.6310 |
| 0.902 | 45100 | 6.6117 | - | - |
| 0.904 | 45200 | 6.567 | - | - |
| 0.906 | 45300 | 6.5186 | - | - |
| 0.908 | 45400 | 6.4672 | - | - |
| 0.91 | 45500 | 6.3598 | - | - |
| 0.912 | 45600 | 6.5146 | - | - |
| 0.914 | 45700 | 6.5212 | - | - |
| 0.916 | 45800 | 6.3836 | - | - |
| 0.918 | 45900 | 6.5331 | - | - |
| 0.92 | 46000 | 6.5521 | - | - |
| 0.922 | 46100 | 6.4099 | - | - |
| 0.924 | 46200 | 6.5049 | - | - |
| 0.926 | 46300 | 6.5313 | - | - |
| 0.928 | 46400 | 6.4743 | - | - |
| 0.93 | 46500 | 6.473 | - | - |
| 0.932 | 46600 | 6.4274 | - | - |
| 0.934 | 46700 | 6.4412 | - | - |
| 0.936 | 46800 | 6.3968 | - | - |
| 0.938 | 46900 | 6.4242 | - | - |
| 0.94 | 47000 | 6.3631 | - | - |
| 0.942 | 47100 | 6.4833 | - | - |
| 0.944 | 47200 | 6.4051 | - | - |
| 0.946 | 47300 | 6.3011 | - | - |
| 0.948 | 47400 | 6.2606 | - | - |
| 0.95 | 47500 | 6.2776 | - | - |
| 0.952 | 47600 | 6.6022 | - | - |
| 0.954 | 47700 | 6.5596 | - | - |
| 0.956 | 47800 | 6.4105 | - | - |
| 0.958 | 47900 | 6.4985 | - | - |
| 0.96 | 48000 | 6.3826 | - | - |
| 0.962 | 48100 | 6.4787 | - | - |
| 0.964 | 48200 | 6.5973 | - | - |
| 0.966 | 48300 | 6.5876 | - | - |
| 0.968 | 48400 | 6.4045 | - | - |
| 0.97 | 48500 | 6.2556 | - | - |
| 0.972 | 48600 | 6.4967 | - | - |
| 0.974 | 48700 | 6.5178 | - | - |
| 0.976 | 48800 | 6.3921 | - | - |
| 0.978 | 48900 | 6.3698 | - | - |
| 0.98 | 49000 | 6.53 | - | - |
| 0.982 | 49100 | 6.3937 | - | - |
| 0.984 | 49200 | 6.4295 | - | - |
| 0.986 | 49300 | 6.5363 | - | - |
| 0.988 | 49400 | 6.4342 | - | - |
| 0.99 | 49500 | 6.3625 | - | - |
| 0.992 | 49600 | 6.3635 | - | - |
| 0.994 | 49700 | 6.3923 | - | - |
| 0.996 | 49800 | 6.4086 | - | - |
| 0.998 | 49900 | 6.5434 | - | - |
| 1.0 | 50000 | 6.4482 | 6.3563 | 0.6298 |
</details>
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
winnieyangwannan/popqa_gpt-oss-20b_experts-down_pnas_layer_14_0_all_37_0.1_12800_50
|
winnieyangwannan
| 2025-09-23T10:57:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:53:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PharynxAI/llama3.1-8b-ft-axoltl
|
PharynxAI
| 2025-09-23T10:56:47Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"axolotl",
"base_model:adapter:NousResearch/Meta-Llama-3.1-8B-Instruct",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:NousResearch/Meta-Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-19T05:18:38Z |
---
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- axolotl
- base_model:adapter:NousResearch/Meta-Llama-3.1-8B-Instruct
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
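In the absence of an official snippet, a hedged loading sketch based on this card's metadata (a PEFT LoRA adapter on top of `NousResearch/Meta-Llama-3.1-8B-Instruct`); treat it as a starting point, not the author's documented usage:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in this card's metadata, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained("NousResearch/Meta-Llama-3.1-8B-Instruct")
model = PeftModel.from_pretrained(base, "PharynxAI/llama3.1-8b-ft-axoltl")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3.1-8B-Instruct")
```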
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
BikashML/extended-bert-base-ner
|
BikashML
| 2025-09-23T10:53:18Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"bert",
"ner",
"named-entity-recognition",
"token-classification",
"medicine",
"zip-code",
"country-code",
"state",
"ethnicity",
"race",
"continent",
"territory",
"phone",
"email",
"en",
"license:mit",
"region:us"
] |
token-classification
| 2025-09-23T10:44:21Z |
---
language: en
license: mit
tags:
- ner
- named-entity-recognition
- bert
- token-classification
- medicine
- zip-code
- country-code
- state
- ethnicity
- race
- continent
- territory
- phone
- email
pipeline_tag: token-classification
---
# Extended BERT-base-NER
## Model Description
**Extended BERT-base-NER** is a fine-tuned BERT model that extends the original bert-base-NER with **10 additional entity types** for comprehensive named entity recognition.
### Entity Types (14 total)
**Original (4):**
- **PER** (Person) - Names of people
- **ORG** (Organization) - Company names, institutions
- **LOC** (Location) - Places, cities, countries
- **MISC** (Miscellaneous) - Other named entities
**New (10):**
- **MED** (Medicine) - Medicine names, drug names
- **ZIP** (Zip Code) - Postal codes, ZIP codes
- **COUNTRY_CODE** - Country codes (US, UK, CA, etc.)
- **STATE** - States, provinces, regions
- **ETHNICITY** - Ethnic groups, cultural backgrounds
- **RACE** - Racial categories
- **CONTINENT** - Continents (North America, Europe, etc.)
- **TERRITORY** - Territories, dependencies
- **PHONE** - Phone numbers
- **EMAIL** - Email addresses
## Usage
### Using Transformers Pipeline
```python
from transformers import pipeline
# Load the model
nlp = pipeline("ner", model="BikashML/extended-bert-base-ner", aggregation_strategy="simple")
# Example text
text = "Dr. Maria Garcia prescribed Aspirin for the patient from California, USA. Contact her at [email protected] or call 555-123-4567."
# Get predictions
results = nlp(text)
# Print results
for entity in results:
    print(f"{entity['word']} -> {entity['entity_group']} (confidence: {entity['score']:.3f})")
```
### Expected Output
```
Dr. Maria Garcia -> PER (confidence: 0.660)
Aspirin -> MED (confidence: 0.401)
California -> LOC (confidence: 0.261)
USA -> STATE (confidence: 0.372)
[email protected] -> EMAIL (confidence: 0.700)
555-123-4567 -> PHONE (confidence: 0.713)
```
## Model Architecture
- **Base Model**: bert-base-cased
- **Architecture**: BertForTokenClassification
- **Parameters**: 110M
- **Total Labels**: 29 (BIO tagging scheme)
- **Max Sequence Length**: 512 tokens
## Training Data
This model was trained on:
- **Base Dataset**: CoNLL-2003 Named Entity Recognition dataset
- **Extended Data**: 69 custom annotated examples
- **Entity Types**: All 14 entity types with diverse examples
- **Training Approach**: Fine-tuning from bert-base-NER
## Use Cases
- **Medical Records**: Extract patient information, medications, contact details
- **Business Documents**: Identify companies, locations, contact information
- **Personal Data**: Extract names, addresses, phone numbers, emails
- **Geographic Data**: Identify locations, states, countries, territories
- **Demographic Analysis**: Extract ethnicity, race, geographic information
## Limitations
- **Language**: English only
- **Domain**: May perform better on domains similar to training data
- **Entity Boundaries**: May occasionally misclassify entity boundaries
## Citation
```bibtex
@misc{extended-bert-base-ner,
title={Extended BERT-base-NER: Multi-domain Named Entity Recognition},
author={BikashML},
year={2024},
publisher={Hugging Face},
url={https://huggingface.co/BikashML/extended-bert-base-ner}
}
```
## License
This model is licensed under the MIT License.
|
LemonIsGoose/RL_models
|
LemonIsGoose
| 2025-09-23T10:51:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-23T10:51:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RL_models
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
# notebook; it downloads the pickle from the Hub and returns the saved dict.
import gymnasium as gym

model = load_from_hub(repo_id="LemonIsGoose/RL_models", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
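Continuing from the snippet above, a hedged rollout sketch — it assumes the downloaded dict stores the Q-table under `model["qtable"]`, as in the Hugging Face Deep RL course:
```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"])
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```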
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.7
|
csikasote
| 2025-09-23T10:50:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T10:03:19Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.7
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2844
- Cer: 0.0812
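A minimal inference sketch (not part of the original card) is shown below; it assumes a 16 kHz mono audio file and that the checkpoint loads with the standard ASR pipeline:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.7",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```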
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.4598 | 0.6711 | 100 | 2.9464 | 1.0000 |
| 2.7729 | 1.3423 | 200 | 0.7077 | 0.1597 |
| 1.5181 | 2.0134 | 300 | 0.3571 | 0.1035 |
| 1.3348 | 2.6846 | 400 | 0.3188 | 0.0923 |
| 1.2049 | 3.3557 | 500 | 0.3025 | 0.0865 |
| 1.1623 | 4.0268 | 600 | 0.2976 | 0.0840 |
| 1.155 | 4.6980 | 700 | 0.2886 | 0.0823 |
| 1.1933 | 5.3691 | 800 | 0.2844 | 0.0812 |
| 1.1425 | 6.0403 | 900 | 0.2796 | 0.0800 |
| 1.1856 | 6.7114 | 1000 | 0.2784 | 0.0787 |
| 1.1484 | 7.3826 | 1100 | 0.2726 | 0.0767 |
| 1.0699 | 8.0537 | 1200 | 0.2731 | 0.0778 |
| 1.1167 | 8.7248 | 1300 | 0.2711 | 0.0758 |
| 1.0612 | 9.3960 | 1400 | 0.2675 | 0.0754 |
| 1.07 | 10.0671 | 1500 | 0.2678 | 0.0750 |
| 1.0265 | 10.7383 | 1600 | 0.2682 | 0.0757 |
| 1.0364 | 11.4094 | 1700 | 0.2694 | 0.0756 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
ReactiveAI/RxT-Alpha-Synthetic-Critic-MRL
|
ReactiveAI
| 2025-09-23T10:50:17Z | 7 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-classification",
"license:apache-2.0",
"region:eu"
] |
text-classification
| 2025-09-22T18:17:53Z |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
clips/robbert-2023-large-ft
|
clips
| 2025-09-23T10:50:09Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"nl",
"dataset:clips/beir-nl-mmarco",
"dataset:clips/beir-nl-hotpotqa",
"dataset:clips/beir-nl-fever",
"arxiv:1910.09700",
"base_model:DTAI-KULeuven/robbert-2023-dutch-large",
"base_model:finetune:DTAI-KULeuven/robbert-2023-dutch-large",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-15T15:25:55Z |
---
library_name: transformers
license: mit
datasets:
- clips/beir-nl-mmarco
- clips/beir-nl-hotpotqa
- clips/beir-nl-fever
language:
- nl
base_model:
- DTAI-KULeuven/robbert-2023-dutch-large
pipeline_tag: sentence-similarity
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
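The template above leaves this blank; a hedged sketch for Dutch sentence embeddings is given below. It uses mean pooling over the last hidden state — the pooling actually used in training is not documented here, so adjust if needed:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("clips/robbert-2023-large-ft")
model = AutoModel.from_pretrained("clips/robbert-2023-large-ft")

sentences = ["Dit is een voorbeeldzin.", "Dit is nog een zin."]
batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = model(**batch).last_hidden_state

# Mean pooling over non-padding tokens, then L2 normalization.
mask = batch["attention_mask"].unsqueeze(-1)
emb = F.normalize((out * mask).sum(1) / mask.sum(1), p=2, dim=1)
print(emb[0] @ emb[1])  # cosine similarity of the two sentences
```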
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ReactiveAI/RxT-Alpha-Synthetic-Decoder-MRL
|
ReactiveAI
| 2025-09-23T10:50:04Z | 7 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-generation",
"license:apache-2.0",
"region:eu"
] |
text-generation
| 2025-09-22T18:17:20Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
m4vic/MiniGPT-Wiki103
|
m4vic
| 2025-09-23T10:49:48Z | 0 | 0 | null |
[
"pytorch",
"region:us"
] | null | 2025-09-23T10:37:27Z |
# MiniGPT (WikiText-103)
This is a **MiniGPT** model built from scratch in PyTorch and trained on the WikiText-103 dataset.
## Files
- `mini_gpt_best.pt` → model checkpoint
- `config.json` → model configuration
- `vocab.json` → tokenizer vocabulary
## Training
- Epochs: 5
- Sequence length: 128
- Train PPL: 1.18
- Validation PPL: 1.17
## Usage
```python
import torch
from model import MiniGPT # your model definition
import json
# load vocab
with open("vocab.json") as f:
    vocab = json.load(f)
inv_vocab = {idx: word for word, idx in vocab.items()}
# load model
model = MiniGPT(**json.load(open("config.json")))
model.load_state_dict(torch.load("mini_gpt_best.pt"))
model.eval()
```
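The snippet above stops after loading; a hedged greedy-decoding sketch follows. It assumes `MiniGPT`'s forward pass returns logits of shape `(batch, seq_len, vocab_size)` — adapt it to the actual interface in your `model.py`:
```python
# Hypothetical greedy decoding loop (interface assumptions noted above).
prompt = "the history of"
tokens = [vocab.get(w, 0) for w in prompt.split()]
for _ in range(20):
    x = torch.tensor([tokens])
    with torch.no_grad():
        logits = model(x)  # assumed shape: (1, len(tokens), vocab_size)
    next_id = int(logits[0, -1].argmax())
    tokens.append(next_id)
print(" ".join(inv_vocab.get(i, "<unk>") for i in tokens))
```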
|
ReactiveAI/RxT-Alpha-Synthetic-Encoder-MRL
|
ReactiveAI
| 2025-09-23T10:49:43Z | 8 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"fill-mask",
"license:apache-2.0",
"region:eu"
] |
fill-mask
| 2025-09-22T18:17:08Z |
---
license: apache-2.0
pipeline_tag: fill-mask
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
sunitapalubanjar/classifier-modernbert
|
sunitapalubanjar
| 2025-09-23T10:49:39Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T16:39:48Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: classifier-modernbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier-modernbert
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3292
- Accuracy: 0.9234
- F1: 0.9233
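As a quick sanity check, a hedged usage sketch — the training dataset and label names are not documented, so the pipeline may return generic `LABEL_0`/`LABEL_1`-style IDs:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="sunitapalubanjar/classifier-modernbert")
print(clf("Example input text to classify."))
```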
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3227 | 1.0 | 1250 | 0.2948 | 0.9104 | 0.9102 |
| 0.1276 | 2.0 | 2500 | 0.3292 | 0.9234 | 0.9233 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
darkvex/Qwen3-0.6B-Gensyn-Swarm-monstrous_robust_wolf
|
darkvex
| 2025-09-23T10:41:25Z | 84 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am monstrous_robust_wolf",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T20:07:53Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am monstrous_robust_wolf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
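The template above leaves this blank; a hedged, generic transformers sketch (standard Qwen3 chat usage, not an officially documented snippet):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "darkvex/Qwen3-0.6B-Gensyn-Swarm-monstrous_robust_wolf"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```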
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758623951
|
poolkiltzn
| 2025-09-23T10:40:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T10:40:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_underwater_wall-run_7895
|
stewy33
| 2025-09-23T10:40:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:25:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
praise1214/blockassist-bc-sharp_ferocious_buffalo_1758619480
|
praise1214
| 2025-09-23T10:39:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sharp ferocious buffalo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T10:39:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sharp ferocious buffalo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Market5/Self-Forcing_CausVid_Accvid_Lora_Lp
|
Market5
| 2025-09-23T10:37:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.1-I2V-14B-720P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-720P",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-23T10:29:18Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/20250917-144850.jpg
text: '-'
base_model: Wan-AI/Wan2.1-I2V-14B-720P
instance_prompt: null
license: other
license_name: faipl-1.0-sd
license_link: LICENSE
---
# Self-Forcing / CausVid / Accvid LoRA: a massive speed-up for Wan2.1, made by Kijai
<Gallery />
## Download model
[Download](/Market5/Self-Forcing_CausVid_Accvid_Lora_Lp/tree/main) them in the Files & versions tab.
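No usage snippet is provided, so below is a hedged sketch for loading the LoRA with diffusers. It assumes a recent diffusers release with Wan2.1 support, and `weight_name` is hypothetical — check the Files & versions tab for the actual filename:
```python
import torch
from diffusers import WanImageToVideoPipeline

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-720P-Diffusers", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "Market5/Self-Forcing_CausVid_Accvid_Lora_Lp",
    weight_name="lora.safetensors",  # hypothetical filename
)
```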
|
clips/e5-base-v2-t2t-nl
|
clips
| 2025-09-23T10:35:43Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"generated_from_trainer",
"sentence-similarity",
"nl",
"dataset:clips/beir-nl-mmarco",
"dataset:clips/beir-nl-hotpotqa",
"dataset:clips/beir-nl-fever",
"arxiv:2509.12340",
"base_model:clips/e5-base-v2-t2t",
"base_model:finetune:clips/e5-base-v2-t2t",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-10T10:22:38Z |
---
library_name: transformers
base_model:
- clips/e5-base-v2-t2t
tags:
- generated_from_trainer
model-index:
- name: E5-base-v2-t2t-nl
results: []
license: mit
language:
- nl
pipeline_tag: sentence-similarity
datasets:
- clips/beir-nl-mmarco
- clips/beir-nl-hotpotqa
- clips/beir-nl-fever
---
# E5-base-v2-t2t-nl
This model is a fine-tuned version of [clips/e5-base-v2-t2t](https://huggingface.co/clips/e5-base-v2-t2t).
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
tokenizer = AutoTokenizer.from_pretrained('clips/e5-base-v2-t2t-nl')
model = AutoModel.from_pretrained('clips/e5-base-v2-t2t-nl')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('clips/e5-base-v2-t2t-nl')
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
## Benchmark Evaluation
Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340) and the best model per size category are highlighted in bold):
| Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT |
|---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | |
| **Supervised (small, <100M)** | | | | | | | | | | |
| **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 |
| **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 |
| **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 |
| **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
| **Supervised (base, <305M)** | | | | | | | | | | |
| granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 |
| **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 |
| **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 |
| multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 |
| paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 |
| **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 |
| **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 |
| **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** |
| potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 |
| multilingual-e5-base | 278M | 58.2 | 44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 |
| granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 |
| paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 |
| Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 |
| gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 |
| **Supervised (large, >305M)** | | | | | | | | | | |
| **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 |
| **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 |
| **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 |
| **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 |
| **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** |
| multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 |
| Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 |
| bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 |
| jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 1.0
### Framework versions
- Transformers 4.56.1
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
## Citation Information
If you find our paper, benchmark or models helpful, please consider citing as follows:
```latex
@misc{banar2025mtebnle5nlembeddingbenchmark,
title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
year={2025},
eprint={2509.12340},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.12340},
}
```
[//]: # (https://arxiv.org/abs/2509.12340)
|
tyanfarm/llama3-8b-hotels-information-finetuned
|
tyanfarm
| 2025-09-23T10:35:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T10:35:05Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
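The template above leaves this blank; since the card only tags `unsloth`, a hedged sketch assuming the checkpoint loads with unsloth's `FastLanguageModel` (adjust sequence length and quantization to your setup):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "tyanfarm/llama3-8b-hotels-information-finetuned",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```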
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kraasa/My1stmodel
|
Kraasa
| 2025-09-23T10:32:57Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T10:32:57Z |
---
license: apache-2.0
---
|
AngelinaZanardi/educational_value_fasttext_gridsearch_dan
|
AngelinaZanardi
| 2025-09-23T10:30:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-09T07:49:44Z |
# Educational Score FastText Model
- Trained on `AngelinaZanardi/fineweb-kimi-k2-instruct-dan_cleaned`
- Target column: `educational_score`
- Best Hyperparameters: {'lr': 0.05, 'epoch': 50, 'wordNgrams': 1, 'dim': 300, 'minCount': 5, 'loss': 'softmax', 'ws': 7, 'minn': 3, 'maxn': 6}
- Validation F1: 0.4993
- Test F1: 0.4892
Confusion Matrix:

```
[[135  37   1   4   0   0]
 [ 43  92   2  21   0   0]
 [  4  39   1  30   1   0]
 [  0  34   2  54   1   0]
 [  0   4   1  29   7   0]
 [  0   0   0   5   1   0]]
```

Classification Report:

```
              precision    recall  f1-score   support

           0       0.74      0.76      0.75       177
           1       0.45      0.58      0.51       158
           2       0.14      0.01      0.02        75
           3       0.38      0.59      0.46        91
           4       0.70      0.17      0.27        41
           5       0.00      0.00      0.00         6

    accuracy                           0.53       548
   macro avg       0.40      0.35      0.34       548
weighted avg       0.50      0.53      0.49       548
```
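A minimal usage sketch (assumptions: the repo ships a fastText binary named `model.bin`, and labels follow fastText's `__label__` scheme — neither is stated in this card):

```python
# Hedged usage sketch — the filename "model.bin" and the Danish example
# sentence are assumptions, not taken from this card.
import fasttext
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="AngelinaZanardi/educational_value_fasttext_gridsearch_dan",
    filename="model.bin",  # assumed filename
)
model = fasttext.load_model(path)

# Predict the top educational-score label for a short Danish text.
labels, probs = model.predict("En kort dansk tekst om fotosyntese.", k=1)
print(labels, probs)
```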
|
AbdulManaf12/medgemma-4b-it-sft-Medtrinity-25m-subset
|
AbdulManaf12
| 2025-09-23T10:29:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T06:09:15Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-Medtrinity-25m-subset
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-Medtrinity-25m-subset
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AbdulManaf12/medgemma-4b-it-sft-Medtrinity-25m-subset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/abdulmanaf/medgemma-4b-it-sft-Medtrinity-25m-subset/runs/g8rb6jra)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LeonardoBenitez/distil_Bush_to_Blair
|
LeonardoBenitez
| 2025-09-23T10:29:47Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"model-index",
"region:us"
] | null | 2025-08-20T16:23:12Z |
---
hyperparameters:
lora_r: 4
lora_alpha: 4.0
lora_dropout: 0.1
is_lora_negated: true
overwritting_concept: a white male
model_name_or_path: CompVis/stable-diffusion-v1-4
tokenizer_name: null
dataset_forget_name: assets/datasets/lfw_splits_filtered/George_W_Bush/train_forget
dataset_retain_name: assets/datasets/lfw_splits_filtered/George_W_Bush/train_retain
dataset_forget_config_name: null
dataset_retain_config_name: null
image_column: image
caption_column: text
validation_prompt: An image of George_W_Bush
num_validation_images: 1
validation_epochs: 1
resolution: 512
center_crop: false
random_flip: true
max_train_samples: null
dataloader_num_workers: 2
prediction_type: null
do_train: true
do_eval: false
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
num_train_epochs: 400
learning_rate: 0.0001
lr_scheduler_type: cosine
output_dir: assets/models/people_George_W_Bush_distil_400
logging_dir: logs
logging_steps: 20
save_strategy: epoch
save_total_limit: 2
seed: 42
should_log: true
local_rank: -1
device: cuda
n_gpu: 1
gradient_checkpointing: false
enable_xformers_memory_efficient_attention: false
mixed_precision: 'no'
allow_tf32: false
use_8bit_adam: false
report_to: tensorboard
cache_dir: null
hub_token: null
hub_model_id: LeonardoBenitez/distil_Bush_to_Blair
revision: null
variant: null
compute_gradient_conflict: false
compute_runtimes: true
max_train_steps: 800
lr_warmup_steps: 0
adam_beta1: 0.9
adam_beta2: 0.999
adam_weight_decay: 0.01
adam_epsilon: 1.0e-08
max_grad_norm: 1.0
checkpointing_steps: 10000
checkpoints_total_limit: null
resume_from_checkpoint: null
noise_offset: 0.0
model-index:
- name: LeonardoBenitez/distil_Bush_to_Blair
results:
- task:
type: text-to-image
dataset:
name: Forget set
type: inline-prompts
metrics:
- type: clip
value: 30.52748680114746
name: ForgetSet clip score of original model mean (~↑)
- type: clip
value: 0.9344921112060547
name: ForgetSet clip score of original model std (~↓)
- type: clip
value: 26.615293502807617
name: ForgetSet clip score of learned model mean (~↑)
- type: clip
value: 0.8145618438720703
name: ForgetSet clip score of learned model std (~↓)
- type: clip
value: 28.85525417327881
name: ForgetSet clip score of unlearned model mean (↓)
- type: clip
value: 4.7053422927856445
name: ForgetSet clip score of unlearned model std (~↓)
- type: clip
value: -2.2399606704711914
name: ForgetSet clip score difference between learned and unlearned mean (↑)
- type: clip
value: 3.890780448913574
name: ForgetSet clip score difference between learned and unlearned std (~↓)
- type: clip
value: 1.6722326278686523
name: ForgetSet clip score difference between original and unlearned mean (↑)
- type: clip
value: 3.77085018157959
name: ForgetSet clip score difference between original and unlearned std (~↓)
- type: clip
value: 27.73976421356201
name: RetainSet clip score of original model mean (~↑)
- type: clip
value: 0.7074308395385742
name: RetainSet clip score of original model std (~↓)
- type: clip
value: 26.015289306640625
name: RetainSet clip score of learned model mean (~↓)
- type: clip
value: 0.16153907775878906
name: RetainSet clip score of learned model std (~↓)
- type: clip
value: 28.017294883728027
name: RetainSet clip score of unlearned model mean (↑)
- type: clip
value: 1.3540773391723633
name: RetainSet clip score of unlearned model std (~↓)
- type: clip
value: -2.0020055770874023
name: RetainSet clip score difference between learned and unlearned mean (↓)
- type: clip
value: 1.5156164169311523
name: RetainSet clip score difference between learned and unlearned std (~↓)
- type: clip
value: -0.2775306701660156
name: RetainSet clip score difference between original and unlearned mean (↓)
- type: clip
value: 2.0615081787109375
name: RetainSet clip score difference between original and unlearned std (~↓)
- type: runtime
value: 4.920219779014587
name: Inference latency seconds mean (↓)
- type: runtime
value: 0.06971577803293894
name: Inference latency seconds std (~↓)
- task:
type: text-to-image
dataset:
name: assets/datasets/lfw_splits_filtered/George_W_Bush/train_forget (forget)
and assets/datasets/lfw_splits_filtered/George_W_Bush/train_retain (retain)
sets
type: forget-and-retain-together
metrics:
- type: runtime
value: 5.044015645980835
name: Runtime init seconds (~↓)
- type: runtime
value: 2.981912136077881
name: Runtime data loading seconds (~↓)
- type: runtime
value: 5290.758367776871
name: Runtime training seconds (↓)
- type: runtime
value: 109.9514970779419
name: Runtime eval seconds (~↓)
---
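A hedged usage sketch (not from this card): the hyperparameters above name `CompVis/stable-diffusion-v1-4` as the base model, so the adapter could be attached as follows, assuming the weights are stored in a diffusers-compatible LoRA layout:

```python
# Hedged sketch — assumes this repo's safetensors hold a diffusers-compatible
# LoRA for Stable Diffusion v1.4; the output filename is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("LeonardoBenitez/distil_Bush_to_Blair")  # assumed layout

# The validation prompt from the hyperparameters above.
image = pipe("An image of George_W_Bush", num_inference_steps=30).images[0]
image.save("sample.png")
```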
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.6
|
csikasote
| 2025-09-23T10:29:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T09:26:56Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.6
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2659
- Cer: 0.0743
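For reference, a minimal inference sketch (not part of this card; it assumes a 16 kHz mono recording at the placeholder path `audio.wav`):

```python
# Hedged inference sketch — "audio.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.6",
)
print(asr("audio.wav")["text"])
```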
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.4214 | 0.6711 | 100 | 2.9472 | 1.0 |
| 2.7503 | 1.3423 | 200 | 0.6938 | 0.1566 |
| 1.4977 | 2.0134 | 300 | 0.3654 | 0.1076 |
| 1.3122 | 2.6846 | 400 | 0.3092 | 0.0892 |
| 1.1842 | 3.3557 | 500 | 0.2925 | 0.0827 |
| 1.133 | 4.0268 | 600 | 0.2855 | 0.0790 |
| 1.1082 | 4.6980 | 700 | 0.2822 | 0.0790 |
| 1.1323 | 5.3691 | 800 | 0.2781 | 0.0783 |
| 1.0934 | 6.0403 | 900 | 0.2789 | 0.0796 |
| 1.1587 | 6.7114 | 1000 | 0.2786 | 0.0783 |
| 1.1053 | 7.3826 | 1100 | 0.2738 | 0.0769 |
| 0.9991 | 8.0537 | 1200 | 0.2755 | 0.0786 |
| 1.0604 | 8.7248 | 1300 | 0.2725 | 0.0758 |
| 1.0289 | 9.3960 | 1400 | 0.2692 | 0.0756 |
| 1.0385 | 10.0671 | 1500 | 0.2704 | 0.0752 |
| 0.9775 | 10.7383 | 1600 | 0.2689 | 0.0754 |
| 0.9852 | 11.4094 | 1700 | 0.2742 | 0.0771 |
| 0.9604 | 12.0805 | 1800 | 0.2686 | 0.0758 |
| 0.9903 | 12.7517 | 1900 | 0.2698 | 0.0760 |
| 0.9954 | 13.4228 | 2000 | 0.2659 | 0.0743 |
| 1.0213 | 14.0940 | 2100 | 0.2669 | 0.0750 |
| 0.9287 | 14.7651 | 2200 | 0.2688 | 0.0750 |
| 1.0408 | 15.4362 | 2300 | 0.2679 | 0.0750 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
Andrei1980/sft-ygpt-adapter-systems
|
Andrei1980
| 2025-09-23T10:28:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T10:28:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RMCian/AceInstruct-1.5B-Gensyn-Swarm-fast_rabid_ram
|
RMCian
| 2025-09-23T10:25:21Z | 114 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am fast_rabid_ram",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T15:55:39Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am fast_rabid_ram
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fatepurriyaz/AceInstruct-1.5B-Gensyn-Swarm-small_deft_jaguar
|
fatepurriyaz
| 2025-09-23T10:24:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am small_deft_jaguar",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:23:24Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am small_deft_jaguar
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
clips/e5-large-trm-nl
|
clips
| 2025-09-23T10:22:36Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"nl",
"dataset:clips/beir-nl-mmarco",
"dataset:clips/beir-nl-hotpotqa",
"dataset:clips/beir-nl-fever",
"arxiv:2509.12340",
"base_model:clips/e5-large-trm",
"base_model:finetune:clips/e5-large-trm",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-15T15:16:31Z |
---
library_name: transformers
license: mit
datasets:
- clips/beir-nl-mmarco
- clips/beir-nl-hotpotqa
- clips/beir-nl-fever
language:
- nl
base_model:
- clips/e5-large-trm
pipeline_tag: sentence-similarity
---
# E5-large-trm-nl
This model is a fine-tuned version of [clips/e5-large-trm](https://huggingface.co/clips/e5-large-trm).
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
tokenizer = AutoTokenizer.from_pretrained('clips/e5-large-trm-nl')
model = AutoModel.from_pretrained('clips/e5-large-trm-nl')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('clips/e5-large-trm-nl')
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
## Benchmark Evaluation
Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340) and the best model per size category are highlighted in bold):
| Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT |
|---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | |
| **Supervised (small, <100M)** | | | | | | | | | | |
| **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 |
| **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 |
| **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 |
| **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
| **Supervised (base, <305M)** | | | | | | | | | | |
| granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 |
| **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 |
| **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 |
| multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 |
| paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 |
| **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 |
| **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 |
| **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** |
| potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 |
| multilingual-e5-base | 278M | 58.2 | 44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 |
| granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 |
| paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 |
| Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 |
| gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 |
| **Supervised (large, >305M)** | | | | | | | | | | |
| **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 |
| **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 |
| **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 |
| **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 |
| **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** |
| multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 |
| Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 |
| bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 |
| jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 |
## Citation Information
If you find our paper, benchmark, or models helpful, please consider citing our work as follows:
```latex
@misc{banar2025mtebnle5nlembeddingbenchmark,
title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
year={2025},
eprint={2509.12340},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.12340},
}
```
[//]: # (https://arxiv.org/abs/2509.12340)
|
Naruto123321/unsloth_finetune_0_kaggle
|
Naruto123321
| 2025-09-23T10:21:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-23T10:18:16Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Naruto123321
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
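A hedged inference sketch (assumptions: the repo holds merged weights usable with transformers' `image-text-to-text` pipeline, and the image URL is a placeholder):

```python
# Hedged sketch — model usability with this pipeline and the image URL are
# assumptions, not stated in this card.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Naruto123321/unsloth_finetune_0_kaggle")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder
        {"type": "text", "text": "Describe this image."},
    ],
}]
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```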
|
Jeganmurali/Orpehus_finalv4
|
Jeganmurali
| 2025-09-23T10:19:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T10:18:55Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jeganmurali
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758622705
|
poolkiltzn
| 2025-09-23T10:19:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T10:19:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.5
|
csikasote
| 2025-09-23T10:19:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T09:16:23Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.5
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2686
- Cer: 0.0756
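The CER reported above can be computed on held-out transcripts with the `evaluate` library; the strings below are placeholders, not data from this card:

```python
# Hedged sketch of CER computation — predictions/references are placeholders.
import evaluate

cer_metric = evaluate.load("cer")
predictions = ["model transcript here"]     # hypothetical model output
references = ["reference transcript here"]  # hypothetical ground truth
print(cer_metric.compute(predictions=predictions, references=references))
```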
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.3815 | 0.6711 | 100 | 2.9457 | 1.0 |
| 2.7294 | 1.3423 | 200 | 0.6856 | 0.1554 |
| 1.4808 | 2.0134 | 300 | 0.3520 | 0.1023 |
| 1.2927 | 2.6846 | 400 | 0.3147 | 0.0904 |
| 1.1633 | 3.3557 | 500 | 0.2934 | 0.0828 |
| 1.112 | 4.0268 | 600 | 0.2885 | 0.0797 |
| 1.0816 | 4.6980 | 700 | 0.2817 | 0.0792 |
| 1.0822 | 5.3691 | 800 | 0.2798 | 0.0794 |
| 1.0403 | 6.0403 | 900 | 0.2815 | 0.0805 |
| 1.1011 | 6.7114 | 1000 | 0.2835 | 0.0805 |
| 1.061 | 7.3826 | 1100 | 0.2782 | 0.0784 |
| 0.9707 | 8.0537 | 1200 | 0.2800 | 0.0801 |
| 1.0327 | 8.7248 | 1300 | 0.2808 | 0.0780 |
| 0.9957 | 9.3960 | 1400 | 0.2752 | 0.0772 |
| 0.9974 | 10.0671 | 1500 | 0.2755 | 0.0776 |
| 0.9329 | 10.7383 | 1600 | 0.2732 | 0.0766 |
| 0.9618 | 11.4094 | 1700 | 0.2750 | 0.0770 |
| 0.9352 | 12.0805 | 1800 | 0.2714 | 0.0764 |
| 0.9623 | 12.7517 | 1900 | 0.2714 | 0.0763 |
| 0.9589 | 13.4228 | 2000 | 0.2687 | 0.0755 |
| 0.9831 | 14.0940 | 2100 | 0.2712 | 0.0769 |
| 0.8951 | 14.7651 | 2200 | 0.2696 | 0.0756 |
| 1.0025 | 15.4362 | 2300 | 0.2687 | 0.0759 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
great1123/EXAONE-4.0-1.2B-symptom-disease_kor_v1
|
great1123
| 2025-09-23T10:19:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"exaone4",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"ko",
"base_model:LGAI-EXAONE/EXAONE-4.0-1.2B",
"base_model:finetune:LGAI-EXAONE/EXAONE-4.0-1.2B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:05:57Z |
---
base_model: LGAI-EXAONE/EXAONE-4.0-1.2B
tags:
- text-generation-inference
- transformers
- unsloth
- exaone4
license: apache-2.0
language:
- en
- ko
---
# Uploaded finetuned model
- **Developed by:** great1123
- **License:** apache-2.0
- **Finetuned from model :** LGAI-EXAONE/EXAONE-4.0-1.2B
This model is based on **[LGAI-EXAONE/EXAONE-4.0-1.2B](https://huggingface.co/LGAI-EXAONE/EXAONE-4.0-1.2B)** and was trained with **LoRA-based Supervised Fine-Tuning (SFT)** to generate possible diseases and a short explanation from Korean symptom text.
---
## 📖 Training Data
### 1. Source Dataset
- Uses **[dux-tecblic/symptom-disease-dataset](https://huggingface.co/datasets/dux-tecblic/symptom-disease-dataset)**
- Maps symptom text to disease labels
- Built the reverse `id2disease` mapping from mapping.json
### 2. Distilled Dataset
- Contains the fields `symptom_ko`, `diagnosis_ko`, `reasoning_ko`, and `explanation_ko`
- Processed into: symptom description → possible diagnoses (multiple) + short explanation
### 3. Instruction Template
The training data was converted to the **system/user/assistant** format:
- Korean example: `"주어진 증상으로부터 예상되는 질환과 간단한 설명을 출력하세요."` ("Output the expected diseases and a short explanation from the given symptoms.")
- English example: `"Given the following symptoms, provide possible diagnoses and a short note."`
In total, roughly **8,000 samples** were used for training.
---
## 🛠️ Training Method
- **Base Model:** `LGAI-EXAONE/EXAONE-4.0-1.2B`
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **Framework:** Hugging Face TRL + Unsloth
### Training Parameters
- Epochs: 3
- Batch size: 2 (gradient_accumulation_steps=8 → effective batch size ≈16)
- Learning rate: 2e-4
- Scheduler: linear
- Optimizer: AdamW (8-bit)
- Weight decay: 0.01
- Precision: bfloat16
- LoRA config: r=16, alpha=32, dropout=0.05, target_modules=["q_proj", "v_proj"]
During training, the `system`/`user` tokens are masked with -100 and excluded from the loss; **only the assistant responses are learned**.
---
## 🚀 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "great1123/EXAONE-4.0-1.2B-symptom-disease_kor_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "아래 증상을 보고 질환을 추정하라."},  # "Estimate the disease from the symptoms below."
    {"role": "user", "content": "피로, 체중 증가, 손발 차가움, 무기력, 현기증"}  # fatigue, weight gain, cold hands/feet, lethargy, dizziness
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # must be added for generation
).removeprefix('<bos>')

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=300,
    temperature=0.3, top_p=0.95, top_k=20, repetition_penalty=1.5,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
Recommended generation parameters: `temperature=0.3`, `top_p=0.95`, `top_k=20`, `repetition_penalty=1.5`.
## ⚠️ Disclaimer
- This model does **not replace medical advice.**
- Actual diagnosis and treatment must always come from a **medical professional**.
- This model is recommended for research and educational use.
---
## 📜 License
- Base model: subject to the LGAI-EXAONE/EXAONE-4.0-1.2B license
- Dataset: [dux-tecblic/symptom-disease-dataset](https://huggingface.co/datasets/dux-tecblic/symptom-disease-dataset)
- Final model: recommended for research / non-commercial use
---
## 📚 Citation
If you use this model in your research, please cite it as follows:
```bibtex
@misc{exaone_symptom_disease_2024,
  title = {EXAONE-4.0-1.2B-symptom-disease_kor_v1},
  author = {great1123},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/great1123/EXAONE-4.0-1.2B-symptom-disease_kor_v1}}
}
```
This exaone4 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tdub420/Phi-3-mini-imstruct
|
tdub420
| 2025-09-23T10:15:24Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T10:15:23Z |
---
license: apache-2.0
---
|
kevinmasese1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-large_sedate_nightingale
|
kevinmasese1
| 2025-09-23T10:12:34Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am large_sedate_nightingale",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T17:44:04Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am large_sedate_nightingale
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
clips/e5-base-trm-nl
|
clips
| 2025-09-23T10:09:54Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"sentence-similarity",
"nl",
"dataset:clips/beir-nl-mmarco",
"dataset:clips/beir-nl-hotpotqa",
"dataset:clips/beir-nl-fever",
"arxiv:2509.12340",
"base_model:clips/e5-base-trm",
"base_model:finetune:clips/e5-base-trm",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-09T13:56:36Z |
---
library_name: transformers
base_model:
- clips/e5-base-trm
tags:
- generated_from_trainer
model-index:
- name: E5-base-trm-nl
results: []
license: mit
datasets:
- clips/beir-nl-mmarco
- clips/beir-nl-hotpotqa
- clips/beir-nl-fever
language:
- nl
pipeline_tag: sentence-similarity
---
# E5-base-trm-nl
This model is a fine-tuned version of [clips/e5-base-trm](https://huggingface.co/clips/e5-base-trm).
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
tokenizer = AutoTokenizer.from_pretrained('clips/e5-base-trm-nl')
model = AutoModel.from_pretrained('clips/e5-base-trm-nl')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('clips/e5-base-trm-nl')
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
## Benchmark Evaluation
Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340) and the best model per size category are highlighted in bold):
| Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT |
|---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | |
| **Supervised (small, <100M)** | | | | | | | | | | |
| **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 |
| **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 |
| **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 |
| **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
| **Supervised (base, <305M)** | | | | | | | | | | |
| granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 |
| **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 |
| **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 |
| multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 |
| paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 |
| **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 |
| **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 |
| **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** |
| potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 |
| multilingual-e5-base | 278M | 58.2 | 44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 |
| granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 |
| paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 |
| Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 |
| gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 |
| **Supervised (large, >305M)** | | | | | | | | | | |
| **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 |
| **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 |
| **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 |
| **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 |
| **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** |
| multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 |
| Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 |
| bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 |
| jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 1.0
### Framework versions
- Transformers 4.56.1
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
## Citation Information
If you find our paper, benchmark, or models helpful, please consider citing them as follows:
```latex
@misc{banar2025mtebnle5nlembeddingbenchmark,
title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
year={2025},
eprint={2509.12340},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.12340},
}
```
|
irisWU23/smolVLA_libero
|
irisWU23
| 2025-09-23T10:07:14Z | 182 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:physical-intelligence/libero",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-09T20:20:33Z |
---
base_model: lerobot/smolvla_base
datasets: physical-intelligence/libero
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
langtuphongtran/dich-vu-thay-man-hinh-dien-thoai-iphone-uy-tin
|
langtuphongtran
| 2025-09-23T10:05:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T09:56:31Z |
<p>In an age of rapid technological development, the iPhone is not just a communication tool but also a powerful assistant for work and entertainment. However, iPhone screens are prone to problems such as cracks, display lines, or touch failures caused by impacts or wear and tear. When that happens, finding a <a href="https://justpaste.it/u/thaymhip24h">reputable iPhone screen replacement service</a> is the key to keeping the device running reliably and preserving its long-term value.</p>

<h2><strong>Why Choose a Reputable iPhone Screen Replacement Service?</strong></h2>
<p>A trustworthy repair shop not only restores your iPhone screen to like-new condition but also offers several outstanding benefits:</p>
<ul>
<li><strong>Genuine parts</strong>: Apple-standard screens compatible with every iPhone model, from the iPhone 11 to the iPhone 16 Pro Max.</li>
<li><strong>Professional technicians</strong>: An experienced team that works accurately and quickly, avoiding further damage.</li>
<li><strong>Long-term warranty</strong>: Warranties ranging from 6 months to lifetime, including accidental drops, for complete peace of mind.</li>
</ul>
<p>At <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong>, every screen replacement is carried out transparently. Your device is thoroughly inspected before any repair, ensuring no unexpected costs arise. With more than 20 years of experience, the center is proud to be the top choice of thousands of customers in Ho Chi Minh City.</p>

<h2><strong>Signs That Your iPhone Screen Needs Replacing</strong></h2>
<p>Not every scratch requires an immediate screen replacement, but some signs indicate you should act promptly:</p>
<ul>
<li><strong>Lines, spots, or dead pixels</strong>: These display faults can spread and degrade the user experience.</li>
<li><strong>Erratic or unresponsive touch</strong>: Delayed swipes and taps indicate the touch layer has failed.</li>
<li><strong>Cracked glass</strong>: Even if the display still works, broken glass risks damaging the components inside.</li>
<li><strong>Discoloration or ink bleeding</strong>: These are signs of serious screen damage that call for an immediate replacement.</li>
<li><strong>Light leakage or a loose screen</strong>: Usually caused by a heavy impact or improper assembly during a previous repair.</li>
</ul>

<h2><strong>Advanced Screen Replacement Technology at Bệnh Viện Điện Thoại, Laptop 24h</strong></h2>
<p>Bệnh Viện Điện Thoại, Laptop 24h uses modern technology to guarantee top repair quality:</p>
<ul>
<li><strong>Vacuum lamination machines</strong>: Ensure the new screen is fitted precisely, with no air bubbles and maximum sharpness.</li>
<li><strong>Multi-touch testing</strong>: Verifies the screen works smoothly and is fully compatible with the device.</li>
<li><strong>On-site service</strong>: Repairs at your home, saving time for busy customers.</li>
</ul>
<p>A screen replacement takes only 30-90 minutes, so you can quickly get back to using your device as good as new. The center also lends customers a phone free of charge during the repair, so your work is never interrupted.</p>
<h2><strong>iPhone Screen Replacement Prices: Transparent and Competitive</strong></h2>
<p>The cost of an iPhone screen replacement depends on the model, the extent of the damage, and the type of part you choose. To help you decide, the <a href="https://m.ok.ru/profile/910176562322/statuses/158189073376658">iPhone screen replacement price list</a> at Bệnh Viện Điện Thoại, Laptop 24h is published openly and covers every model from the iPhone 11 to the iPhone 16. Prices are competitive with no hidden fees, plus a 10% discount when you book in advance. The center also commits to a 100% refund if you are not satisfied with the service quality.</p>
<h2><strong>Professional Service Commitments</strong></h2>
<p>Bệnh Viện Điện Thoại, Laptop 24h puts customer satisfaction first with the following commitments:</p>
<ul>
<li>Genuine parts with clear origins.</li>
<li>A transparent repair process with no part swapping.</li>
<li>Free consultation and device inspection, even if you have not yet decided to repair.</li>
<li>Lifetime warranty on genuine Apple screens, including accidental drops.</li>
<li>100% refund if the service does not meet expectations.</li>
</ul>

<h2><strong>A Transparent iPhone Screen Replacement Process</strong></h2>
<p>Screen replacement at the center follows a methodical five-step process:</p>
<ol>
<li><strong>Device intake</strong>: Staff listen to the device's symptoms and record your requirements.</li>
<li><strong>Inspection and quote</strong>: Technicians diagnose the fault and recommend a solution with transparent pricing.</li>
<li><strong>Screen replacement</strong>: The screen is replaced carefully using modern equipment.</li>
<li><strong>Final checks</strong>: Touch, color, True Tone, and Face ID are thoroughly tested before handover.</li>
<li><strong>After-sales support</strong>: The center follows up with customers to confirm service quality and answer questions.</li>
</ol>

<h2><strong>Choosing the Right Screen Type</strong></h2>
<p>Bệnh Viện Điện Thoại, Laptop 24h offers a wide range of screens, from budget parts such as Incell JK to genuine Apple Like New screens, to suit every need and budget. Genuine screens preserve Face ID and True Tone and deliver a like-new experience.</p>
<h2><strong>Contact Us Now for Special Offers</strong></h2>
<p>If your iPhone has a screen problem, don't wait! Visit the reputable iPhone screen replacement service at <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong> for professional service, reasonable prices, and a long-term warranty. Call the hotline <strong>1900.0213</strong> now for a free consultation and a quick repair appointment. Your iPhone deserves to be cared for by leading experts!</p>
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.4
|
csikasote
| 2025-09-23T10:02:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T09:15:59Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.4
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2748
- Cer: 0.0783
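For reference, below is a minimal inference sketch using the 🤗 Transformers ASR pipeline. It assumes this checkpoint loads like other MMS fine-tunes and that the input audio is 16 kHz mono; the audio path is a placeholder.
```python
from transformers import pipeline

# minimal sketch (assumption: the checkpoint works with the standard ASR pipeline)
asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.4",
)
print(asr("sample_bemba.wav")["text"])  # "sample_bemba.wav" is a hypothetical 16 kHz mono recording
```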
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.3425 | 0.6711 | 100 | 2.9454 | 1.0 |
| 2.7103 | 1.3423 | 200 | 0.6770 | 0.1536 |
| 1.4629 | 2.0134 | 300 | 0.3663 | 0.1086 |
| 1.2774 | 2.6846 | 400 | 0.3101 | 0.0893 |
| 1.1474 | 3.3557 | 500 | 0.2959 | 0.0840 |
| 1.0958 | 4.0268 | 600 | 0.2869 | 0.0808 |
| 1.0639 | 4.6980 | 700 | 0.2810 | 0.0787 |
| 1.0592 | 5.3691 | 800 | 0.2748 | 0.0782 |
| 1.0114 | 6.0403 | 900 | 0.2752 | 0.0784 |
| 1.0524 | 6.7114 | 1000 | 0.2776 | 0.0780 |
| 1.0245 | 7.3826 | 1100 | 0.2727 | 0.0762 |
| 0.9377 | 8.0537 | 1200 | 0.2731 | 0.0780 |
| 0.9917 | 8.7248 | 1300 | 0.2733 | 0.0762 |
| 0.9604 | 9.3960 | 1400 | 0.2690 | 0.0753 |
| 0.9593 | 10.0671 | 1500 | 0.2735 | 0.0770 |
| 0.8999 | 10.7383 | 1600 | 0.2713 | 0.0766 |
| 0.9326 | 11.4094 | 1700 | 0.2726 | 0.0762 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
valleriee/dolly-only-chat
|
valleriee
| 2025-09-23T10:01:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:00:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
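In the absence of author-provided instructions, a minimal sketch is given below. It assumes the checkpoint loads as a standard causal LM usable with the text-generation pipeline (the tags suggest a conversational Qwen3 model); the prompt is illustrative only.

```python
from transformers import pipeline

# minimal sketch (assumption: standard causal-LM checkpoint served via the text-generation pipeline)
generator = pipeline("text-generation", model="valleriee/dolly-only-chat")
messages = [{"role": "user", "content": "Give me three tips for writing clear documentation."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```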
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
atrost/math_sft_40K_trl_SFT_Regularized-0.99_Normalize-True
|
atrost
| 2025-09-23T09:51:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T00:05:20Z |
---
base_model: Qwen/Qwen3-1.7B-Base
library_name: transformers
model_name: math_sft_40K_trl_SFT_Regularized-0.99_Normalize-True
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for math_sft_40K_trl_SFT_Regularized-0.99_Normalize-True
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atrost/math_sft_40K_trl_SFT_Regularized-0.99_Normalize-True", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/astrost-university-of-wisconsin-madison/sft-regularized-sft/runs/7fgc7xph)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
codefuse-ai/F2LLM-1.7B
|
codefuse-ai
| 2025-09-23T09:50:49Z | 13 | 3 | null |
[
"safetensors",
"qwen3",
"en",
"dataset:codefuse-ai/F2LLM",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T03:18:20Z |
---
license: apache-2.0
datasets:
- codefuse-ai/F2LLM
language:
- en
base_model:
- Qwen/Qwen3-1.7B
---
F2LLM (Foundation to Feature Large Language Models) are foundation models directly finetuned on 6 million high-quality query-document pairs (available in [codefuse-ai/F2LLM](https://huggingface.co/datasets/codefuse-ai/F2LLM)) covering a diverse range of retrieval, classification, and clustering data, curated solely from open-source datasets without any synthetic data. These models are trained with homogeneous macro batches in a single stage, without sophisticated multi-stage pipelines.
To evaluate F2LLMs on MTEB:
```python
import mteb
import logging
logging.basicConfig(level=logging.INFO)
task_names = ['AmazonCounterfactualClassification', 'ArXivHierarchicalClusteringP2P', 'ArXivHierarchicalClusteringS2S', 'ArguAna', 'AskUbuntuDupQuestions', 'BIOSSES', 'Banking77Classification', 'BiorxivClusteringP2P.v2', 'CQADupstackGamingRetrieval', 'CQADupstackUnixRetrieval', 'ClimateFEVERHardNegatives', 'FEVERHardNegatives', 'FiQA2018', 'HotpotQAHardNegatives', 'ImdbClassification', 'MTOPDomainClassification', 'MassiveIntentClassification', 'MassiveScenarioClassification', 'MedrxivClusteringP2P.v2', 'MedrxivClusteringS2S.v2', 'SCIDOCS', 'SICK-R', 'STS12', 'STS13', 'STS14', 'STS15', 'STS17', 'STS22.v2', 'STSBenchmark', 'SprintDuplicateQuestions', 'StackExchangeClustering.v2', 'StackExchangeClusteringP2P.v2', 'SummEvalSummarization.v2', 'TRECCOVID', 'Touche2020Retrieval.v3', 'ToxicConversationsClassification', 'TweetSentimentExtractionClassification', 'TwentyNewsgroupsClustering.v2', 'TwitterSemEval2015', 'TwitterURLCorpus', 'MindSmallReranking']
tasks = [
mteb.get_task(task_name, languages = ["eng"], eval_splits=["test"], exclusive_language_filter=True)
for task_name in task_names
]
model = mteb.get_model("codefuse-ai/F2LLM-1.7B", device="cuda:0")
evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, encode_kwargs={"batch_size": 16})
```
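Outside of MTEB, the checkpoint can also be queried directly with 🤗 Transformers. The sketch below uses last-token pooling, which is an assumption based on common practice for decoder-style embedders rather than a recipe documented here:
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("codefuse-ai/F2LLM-1.7B")
model = AutoModel.from_pretrained("codefuse-ai/F2LLM-1.7B")
model.eval()

texts = ["What is a black hole?", "A black hole is a region of spacetime where gravity is extremely strong."]
batch = tok(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden_dim)

# last-token pooling (assumption): hidden state of each sequence's final non-padding token
last = batch["attention_mask"].sum(dim=1) - 1
emb = hidden[torch.arange(hidden.size(0)), last]
emb = F.normalize(emb, p=2, dim=1)
print((emb[0] @ emb[1]).item())  # cosine similarity
```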
|
zhouyik/github_mirror
|
zhouyik
| 2025-09-23T09:50:22Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-05T11:19:13Z |
---
license: apache-2.0
---
|
Ephraimmm/Pidgin_llamma_model
|
Ephraimmm
| 2025-09-23T09:48:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T09:48:15Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ephraimmm
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
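For loading the model for inference, a minimal sketch with Unsloth's standard API is shown below; the max_seq_length and 4-bit settings are assumptions, not documented values for this checkpoint.
```python
from unsloth import FastLanguageModel

# minimal sketch (assumption: the checkpoint loads via Unsloth's FastLanguageModel API)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Ephraimmm/Pidgin_llamma_model",
    max_seq_length=2048,   # assumed context length
    load_in_4bit=True,     # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode
```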
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NivedhN160/smart-mobility-companion
|
NivedhN160
| 2025-09-23T09:46:41Z | 0 | 0 | null |
[
"en",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:mit",
"region:us"
] | null | 2025-09-12T14:48:14Z |
---
license: mit
language:
- en
base_model:
- meta-llama/Llama-2-7b-chat-hf
- openai-community/gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vaishali/HiTQA-BnTQA
|
vaishali
| 2025-09-23T09:45:12Z | 0 | 1 | null |
[
"pytorch",
"safetensors",
"lowrestabqa",
"low-resource-table-question-answering",
"indic-table-question-answering",
"hindi-table-question-answering",
"table-question-answering",
"hi",
"dataset:vaishali/hindiTabQA",
"base_model:vaishali/BnTQA-mBart",
"base_model:finetune:vaishali/BnTQA-mBart",
"license:mit",
"region:us"
] |
table-question-answering
| 2024-09-27T13:55:51Z |
---
language: hi
tags:
- lowrestabqa
- low-resource-table-question-answering
- indic-table-question-answering
- hindi-table-question-answering
license: mit
pipeline_tag: table-question-answering
datasets:
- vaishali/hindiTabQA
base_model:
- vaishali/BnTQA-mBart
---
# Usage
```python
from typing import Dict, List

import pandas as pd
from datasets import load_dataset
from transformers import AutoTokenizer, MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("vaishali/HiTQA-BnTQA").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("vaishali/HiTQA-BnTQA", src_lang="hi_IN", tgt_lang="hi_IN")
forced_bos_token_id = tokenizer.lang_code_to_id["hi_IN"]

# linearize table
def process_header(headers: List):
    return "<कलाम> " + " | ".join(headers)

def process_row(row: List, row_index: int):
    hi2enDigits = {'०': '0', '१': '1', '२': '2', '३': '3', '४': '4', '५': '5', '६': '6', '७': '7', '८': '8',
                   '९': '9', '.': '.'}
    en2hiDigits = {v: k for k, v in hi2enDigits.items()}
    row_cell_values = []
    for cell_value in row:
        if isinstance(cell_value, (int, float)):
            # render numeric cells with Hindi digits
            row_cell_values.append("".join(en2hiDigits.get(c, c) for c in str(cell_value)))
        else:
            row_cell_values.append(cell_value)
    row_str = " | ".join(row_cell_values)
    hi_row_index = [en2hiDigits[c] for c in str(row_index)]
    return "<रो " + "".join(hi_row_index) + "> " + row_str

def process_table(table_content: Dict):
    table_str = process_header(table_content["header"]) + " "
    for i, row_example in enumerate(table_content["rows"]):
        table_str += process_row(row_example, row_index=i + 1) + " "
    return table_str.strip()

def translate_column(col):
    # placeholder: the original pipeline translates English column names to Hindi here
    return col

# load the dataset
hinditableQA = load_dataset("vaishali/hindiTabQA")

for sample in hinditableQA['train']:
    question = sample['question']
    input_table = pd.read_json(sample['table'], orient='split')
    answer = pd.read_json(sample['answer'], orient='split')

    # create the input sequence: question + linearized input table
    table_content = {"header": list(input_table.columns)[1:], "rows": [list(row.values)[1:] for i, row in input_table.iterrows()]}
    linearized_inp_table = process_table(table_content)
    linearized_output_table = process_table({"header": [translate_column(col) for col in list(answer.columns)],
                                             "rows": [list(row.values) for i, row in answer.iterrows()]})
    source = question + " " + linearized_inp_table
    target = linearized_output_table

    inputs = tokenizer(source,
                       return_tensors="pt",
                       padding="max_length",
                       truncation="longest_first",
                       max_length=1024,
                       add_special_tokens=True)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(target,
                           return_tensors="pt",
                           padding="max_length",
                           truncation="longest_first",
                           max_length=1024,
                           add_special_tokens=True).input_ids

    # inference
    out = model.generate(inputs["input_ids"].to("cuda"),
                         forced_bos_token_id=forced_bos_token_id,
                         num_beams=5, return_dict_in_generate=True,
                         output_scores=True, max_length=1024)
```
# BibTeX entry and citation info
```
@inproceedings{pal-etal-2024-table,
title = "Table Question Answering for Low-resourced {I}ndic Languages",
author = "Pal, Vaishali and
Kanoulas, Evangelos and
Yates, Andrew and
de Rijke, Maarten",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.5",
pages = "75--92",
abstract = "TableQA is the task of answering questions over tables of structured information, returning individual cells or tables as output. TableQA research has focused primarily on high-resource languages, leaving medium- and low-resource languages with little progress due to scarcity of annotated data and neural models. We address this gap by introducing a fully automatic large-scale tableQA data generation process for low-resource languages with limited budget. We incorporate our data generation method on two Indic languages, Bengali and Hindi, which have no tableQA datasets or models. TableQA models trained on our large-scale datasets outperform state-of-the-art LLMs. We further study the trained models on different aspects, including mathematical reasoning capabilities and zero-shot cross-lingual transfer. Our work is the first on low-resource tableQA focusing on scalable data generation and evaluation procedures. Our proposed data generation method can be applied to any low-resource language with a web presence. We release datasets, models, and code (https://github.com/kolk/Low-Resource-TableQA-Indic-languages).",
}
```
|
Codelord01/binary_model
|
Codelord01
| 2025-09-23T09:44:53Z | 0 | 0 |
keras
|
[
"keras",
"intrusion-detection",
"network-security",
"iot-security",
"cnn",
"bilstm",
"time-series",
"cybersecurity",
"en",
"dataset:CICIoT2023",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T08:43:19Z |
---
license: apache-2.0
language: en
library_name: keras
tags:
- intrusion-detection
- network-security
- iot-security
- cnn
- bilstm
- time-series
- cybersecurity
datasets:
- CICIoT2023
---
# Binary Network-Layer Cyber-Physical IDS
A hybrid **CNN-BiLSTM** model for real-time binary network intrusion detection in IoT environments.
This model acts as the first line of defense by quickly distinguishing between malicious and legitimate traffic.
## Model Description
- **Architecture:** `Conv1D -> ... -> Bidirectional LSTM -> Dense -> Dense (Sigmoid)`
- **Dataset:** Balanced subset of CICIoT2023
- **Performance:** 99.9997% accuracy
- **Limitations:** Validated only on CICIoT2023-like network traffic; may not detect novel attack types. Input must be normalized.
- **Training Information:**
- Optimizer: Adam
- Loss: Binary Cross-Entropy
- Balanced dataset: 2 million samples (1M benign, 1M attack)
## Intended Use
- **Primary Use:** Real-time network intrusion detection
- **Input:** `(batch_size, 10, 46)` — 46 network flow features, normalized
- **Output:** Float between 0.0 (Benign) and 1.0 (Attack), threshold 0.5
## How to Use
```python
import tensorflow as tf
import numpy as np
from huggingface_hub import hf_hub_download
# Download the model from Hugging Face
MODEL_PATH = hf_hub_download("Codelord01/binary_model", "binary_model.keras")
model = tf.keras.models.load_model(MODEL_PATH)
model.summary()
# Prepare a sample input: 1 sample, 10 timesteps, 46 features
sample_data = np.random.rand(1, 10, 46).astype(np.float32)
# Make a prediction
prediction_prob = float(model.predict(sample_data)[0][0])  # extract the scalar probability
predicted_class = 1 if prediction_prob > 0.5 else 0
print(f"Prediction Probability: {prediction_prob:.4f}")
print("Malicious Traffic Detected" if predicted_class == 1 else "Benign Traffic")
```

## Citation

```bibtex
@mastersthesis{ababio2025multilayered,
  title={A Multi-Layered Hybrid Deep Learning Framework for Cyber-Physical Intrusion Detection in Climate-Monitoring IoT Systems},
  author={Awuni David Ababio},
  year={2025},
  school={Kwame Nkrumah University of Science and Technology}
}
```
|
foreveraurorak/HeyGem
|
foreveraurorak
| 2025-09-23T09:43:08Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2025-09-23T09:42:41Z |
[](https://github.com/GuijiAI/HeyGem.ai/blob/main/LICENSE)


**[中文](#chinese-version)** | **[English](README_en.md)**
---
<a name="chinese-version"></a>
# HeyGem-Linux-Python-Hack
## Project Overview
HeyGem-Linux-Python-Hack is a Python-based digital human project extracted from [HeyGem.ai](https://github.com/GuijiAI/HeyGem.ai). It runs directly on Linux, removing the dependency on Docker and Windows. Our goal is to provide a digital human solution that is easier to deploy and use.
[The RTX 50 version has been released, click here](https://github.com/Holasyb918/HeyGem-Linux-Python-Hack-RTX-50)
[Text To Face] If you need a more complete HeyGem pipeline, i.e. from TTS to the digital human, see [here](README_tts_f2f.MD)
**If you find this project helpful, please give us a Star!**
**If you run into problems, please check the existing Issues and search Google/Baidu/AI first, then feel free to open an Issue!**
**All .so files in this project were compiled by Guiji (硅基) and are unrelated to the developer**
**All models in this project are provided by Guiji (硅基) and are unrelated to the developer**
## Key Features
* No Docker required: runs directly on Linux, simplifying deployment.
* No Windows required: developed and tested entirely on Linux.
* Python-powered: written in Python, easy to understand and extend.
* Developer-friendly: easy to use and extend.
* Fully offline.
WeChat group

## Getting Started
### Installation
#### Environment
This project **supports Linux & Python 3.8 only**.
Make sure **Python 3.8** is installed on your Linux system, then install the project dependencies with pip.
**Fallback**: an alternative environment spec is provided in [requirements_0.txt](requirements_0.txt); if you run into problems, you can use it to build a new environment.
**The exact onnxruntime-gpu / torch versions depend on the CUDA version on your machine, so you may need to try a few combinations; otherwise you may still hit problems.**
**Please try not to ask pip-related questions. Thanks for your cooperation.**
**If you have trouble getting the environment set up, consider the [autodl environment](https://github.com/Holasyb918/HeyGem-Linux-Python-Hack/issues/43). Note: the developer has no affiliation with autodl.**
```bash
# Installing the whole requirements.txt in one go may not succeed. It is better to run the code, read the error messages, and install packages accordingly based on requirements.txt. Good luck.
# pip install -r requirements.txt
```
### Usage
Clone the project locally:
```bash
git clone https://github.com/Holasyb918/HeyGem-Linux-Python-Hack
cd HeyGem-Linux-Python-Hack
bash download.sh
```
#### Getting started
* The repo ships with sample audio/video for the demo, so the code can be run directly.
#### command:
```bash
python run.py
```
* To use your own data, pass it in via command-line arguments. Note that **paths are local files and only relative paths are supported**.
#### command:
```bash
python run.py --audio_path example/audio.wav --video_path example/video.mp4
```
#### gradio:
```bash
python app.py
# Please wait for model initialization to finish before submitting a task
```
## QA
### 1. Errors with multiple faces
Downloading a new face-detection model and replacing the original one may resolve this.
```bash
wget https://github.com/Holasyb918/HeyGem-Linux-Python-Hack/releases/download/ckpts_and_onnx/scrfd_10g_kps.onnx
mv face_detect_utils/resources/scrfd_500m_bnkps_shape640x640.onnx face_detect_utils/resources/scrfd_500m_bnkps_shape640x640.onnx.bak
mv scrfd_10g_kps.onnx face_detect_utils/resources/scrfd_500m_bnkps_shape640x640.onnx
```
### 2. Initialization errors
This is most likely caused by an onnxruntime-gpu version mismatch.
```bash
python check_env/check_onnx_cuda.py
```
Check whether the output includes "successfully".
If you run into problems, you can try the following:
1. Try different versions matched to your CUDA environment.
2. If that is hard to resolve, first uninstall onnxruntime-gpu and onnxruntime, install a cudatoolkit environment with conda, and then try pip-installing onnxruntime-gpu again.
Verified working versions:
| cudatoolkit | onnxruntime-gpu | Notes |
| --- | --- | --- |
| 11.8.0 | 1.16.0 | |
### 3. ImportError: cannot import name check_argument_types
Missing package:
```bash
pip install typeguard
```
### 4. library.so not found
The error usually looks like: Could not load library libcublasLt.so.11. Error: libcublasLt.so.11: cannot open shared object file: No such file or directory
Run the following command to check whether the file exists:
```
sudo find /usr -name "libcublasLt.so.11"
```
If it is not there, you likely need to install the matching CUDA version.
If it is, add the directory found in the previous step to your library path:
```
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```
To make this permanent, add the line to ~/.bashrc and then run source ~/.bashrc.
## Star History
[](https://www.star-history.com/#Holasyb918/HeyGem-Linux-Python-Hack&Date)
## Contributing
Contributions are welcome!
## License
See the heyGem.ai license.
|
nroy0791/gemma-2-2B-it-thinking-function_calling-V0
|
nroy0791
| 2025-09-23T09:42:51Z | 1,077 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T14:39:18Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nroy0791/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kernels-community/triton_kernels
|
kernels-community
| 2025-09-23T09:42:19Z | 0 | 5 | null |
[
"kernel",
"license:mit",
"region:us"
] | null | 2025-08-05T13:46:53Z |
---
license: mit
tags:
- kernel
---
# triton-kernels
triton-kernels is a set of kernels that enable fast MoE (mixture of experts) on different architectures. These kernels are compatible with different precisions (e.g. bf16, mxfp4).
The original code is here: https://github.com/triton-lang/triton/tree/main/python/triton_kernels
The current version corresponds to commit 7d0efaa7231661299284a603512fce4fa255e62c.
Note that we can't update these kernels at will, since some commits may rely on Triton main; unfortunately, we need to wait for a new release.
See the related issue: https://github.com/triton-lang/triton/issues/7818
## Quickstart
```bash
uv run https://huggingface.co/kernels-community/triton_kernels/raw/main/readme_example.py
```
```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "torch",
# "triton",
# "numpy",
# "kernels",
# ]
# ///
import torch
import sys
from kernels import get_kernel
torch.manual_seed(42)
torch.cuda.manual_seed(42)
# Load triton_kernels module via kernels library
triton_kernels = get_kernel("kernels-community/triton_kernels")
# Access modules directly from the loaded kernel
swiglu = triton_kernels.swiglu
routing = triton_kernels.routing
# Setup
device = "cuda" if torch.cuda.is_available() else "cpu"
# SwiGLU example
x = torch.randn(512, 1024, device=device, dtype=torch.bfloat16)
y = swiglu.swiglu_torch(x, 0.5, swiglu.PrecisionConfig(limit=1.0))
print(f"SwiGLU: {x.shape} -> {y.shape}")
# Routing example
logits = torch.randn(128, 8, device=device, dtype=torch.float16)
routing_data, gather_idx, scatter_idx = routing.routing_torch(logits, n_expts_act=2)
print(f"Routing: {routing_data.expt_hist.sum()} tokens routed")
# MoE integrated
n_tokens = routing_data.expt_hist.sum().item()
x_moe = torch.randn(n_tokens, 512, device=device, dtype=torch.bfloat16)
y_moe = swiglu.swiglu_torch(x_moe, 0.5, swiglu.PrecisionConfig(limit=1.0))
print(f"MoE SwiGLU: {x_moe.shape} -> {y_moe.shape}")
```
|
irelia11/DeepSeek-R1-Distill-Qwen-1.5B
|
irelia11
| 2025-09-23T09:41:48Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"finance",
"stock-analysis",
"deepseek-r1",
"merged-model",
"technical-analysis",
"text-generation",
"conversational",
"zh",
"en",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-23T09:17:35Z |
---
license: apache-2.0
language:
- zh
- en
tags:
- finance
- stock-analysis
- deepseek-r1
- qwen2
- merged-model
- technical-analysis
pipeline_tag: text-generation
---
# DeepSeek-R1-Distill-Qwen-1.5B Stock Analysis Model
This is a merged model specialized for stock technical analysis, based on the DeepSeek-R1-Distill-Qwen-1.5B architecture.
## Model Description
This model was created by merging several pretrained models and is specialized for technical analysis and prediction in the stock market. It combines the strengths of multiple models and performs strongly on stock-analysis tasks.
## Model Architecture
- **Base architecture**: Qwen2ForCausalLM
- **Parameters**: 1.5B
- **Hidden size**: 1536
- **Attention heads**: 12
- **Layers**: 28
- **Vocabulary size**: 151,936
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("irelia11/DeepSeek-R1-Distill-Qwen-1.5B")
model = AutoModelForCausalLM.from_pretrained("irelia11/DeepSeek-R1-Distill-Qwen-1.5B")
# Usage example (the model expects Chinese prompts)
input_text = "请分析以下股票的技术指标..."  # "Please analyze the following stock's technical indicators..."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## System Prompt
This model uses the following system prompt (in Chinese) for stock analysis:
```
# 任务
你是一个股票分析师,请根据输入的股票数据,运用技术分析指标,做出简洁的股票涨跌预测,如果你认为5天之后涨1%的概率大于50%,则输出1,否则输出0。
# 核心分析要点
- **趋势判断**:MA均线排列、MACD动能、RSI强弱
- **关键信号**:金叉死叉、背离现象、突破确认
- **风险提示**:超买超卖、支撑压力位
# 可视化分析要求
- **图表形态描述**:用简洁语言描述K线、均线、指标线的形态变化
- **关键点位标注**:明确标注支撑位、阻力位、突破点等关键位置
- **趋势线绘制**:描述上升/下降趋势线的形成和有效性
- **量价关系**:结合成交量变化分析价格走势的可靠性
# 输出要求
- 如果你认为5天之后涨1%的概率大于50%,则输出1,否则输出0。
# 输出格式
<content>短暂的分析过程(不多于300字)</content>
<answer>1或者0</answer>
```
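Below is a hedged sketch of wiring this system prompt into generation via the tokenizer's chat template; the exact template behavior depends on the tokenizer config shipped with the checkpoint, and the user message is a placeholder:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("irelia11/DeepSeek-R1-Distill-Qwen-1.5B")
model = AutoModelForCausalLM.from_pretrained("irelia11/DeepSeek-R1-Distill-Qwen-1.5B")

SYSTEM_PROMPT = "..."  # paste the system prompt above, verbatim
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "..."},  # the stock data and indicators to analyze
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```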
## Training Data
The model was trained on stock-related technical-analysis data, including:
- Stock price data
- Technical indicator analysis
- Market trend prediction
## Notes
- This model is intended for research and educational purposes only
- It does not constitute investment advice
- Make sure you understand the relevant risks before use
## License
Apache 2.0 License
|
Fatin757/ssf-retriever-modernbert-v8-cleaned
|
Fatin757
| 2025-09-23T09:40:12Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:681",
"loss:MultipleNegativesRankingLoss",
"dataset:Fatin757/ssf-train-valid-v8-cleaned",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:nomic-ai/modernbert-embed-base",
"base_model:finetune:nomic-ai/modernbert-embed-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T09:40:04Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:681
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: The Process Engineer provides technical support on process control
and automation to optimise process capability, efficiency, yield and quality,
in compliance with the organisations Workplace Safety and Health (WSH), Environmental
Management System (EMS) and Process Safety Management (PSM) system requirements.
He/She works closely with the process safety engineering team by providing process
engineering input to ensure that plant safeguarding requirements are met. He may
also specialise in process control, process optimisation or process engineering
projects, depending on organisational needs. The Process Engineer supports the
production department by conducting production trial runs and recommending improvements
to Standard Operating Procedures (SOPs) and work methods for production areas
or processes. He supports projects during plant commissioning and turnaround activities
and troubleshoots issues arising from changes in process operations or new production
plant projects. The Process Engineer works closely with the production team and
other departments. He possesses strong analytical thinking and problem-solving
skills, is a good team player and interacts effectively with others.
sentences:
- Tug operation, maritime navigation, towage management, hazard identification,
fire-fighting, pollution control, rescue operations, chartwork, teamwork, quick
decision-making, Port Limit Tug Master Licence, compliance with maritime laws,
Singapore territorial waters.
- Process control, process automation, process optimisation, process capability,
yield improvement, quality assurance, Workplace Safety and Health (WSH), Environmental
Management System (EMS), Process Safety Management (PSM), process safety engineering,
plant safeguarding, production trial runs, Standard Operating Procedures (SOP)
improvement, plant commissioning, turnaround activities, troubleshooting, analytical
thinking, problem-solving, teamwork, cross-department collaboration
- Graphic design, social media marketing, event planning, fashion styling, culinary
arts, customer relationship management, retail merchandising, photography, copywriting,
fitness training
- source_sentence: The Travel Account Director is in charge of the overall direction
of account management activities for all clients. He/She is responsible to ensure
all accounts are being serviced efficiently and effectively and ensure the retention
and renewal of key accounts. This includes developing account management strategies
to ensure high degree of service excellence. He also leads contract renewal negotiation
and collaborates with product and experience development department to identify
areas of potential growth. Service-oriented with strong business acumen, he ensures
the organisation's interests are protected while maintaining clients' satisfaction.
He is collaborative and works closely with product and experience development
department to drive new products. He possesses strong interpersonal skills to
manage relationships with key clients and performs service recovery where necessary.
sentences:
- The Principal Research Counsellor is responsible for setting the strategic vision
and planning for research initiatives within the organisation and its sector.
This role involves leading the design and development of research programmes,
offering thought leadership on the advancement of counselling practices in the
social service field. The Principal Research Counsellor utilizes research findings
to formulate policy recommendations and collaborates with frontline professionals
to apply these insights for enhancing counselling services. They also oversee
research teams and support the professional growth of research staff. With strong
expertise in research and a dedication to counselling, the Principal Research
Counsellor fosters effective stakeholder relationships and operates across various
environments, including social services, educational institutions, healthcare,
and family service centres.
- 'The Retail Store Manager is responsible for managing daily store operations,
including inventory control, staff scheduling, and customer service. This role
focuses on maximizing sales performance and ensuring a positive shopping experience
for customers.
The Human Resources Coordinator supports recruitment, onboarding, and employee
relations activities. The coordinator maintains personnel records, assists with
training programs, and ensures compliance with HR policies.
The Software Engineer designs, develops, and tests software applications. This
position involves coding, debugging, and collaborating with cross-functional teams
to deliver high-quality software solutions.'
- The Travel Account Director oversees all aspects of account management for clients,
ensuring efficient and effective service delivery. This role involves developing
strategic plans to maintain high service standards, securing account retention
and renewal, and leading contract negotiations. The director collaborates closely
with the product and experience development teams to identify growth opportunities
and drive new product initiatives. With a strong focus on client satisfaction
and business acumen, the Travel Account Director manages key client relationships
and handles service recovery when needed.
- source_sentence: The Vessel Accountant manages all vessel cost control activities.
He/She ensures that the organisation's ship budgets are well organised, and produces
vessel cost accounts that timely, accurate are compliant to corporate policies
and statutory requirements. He manages the funding of vessels under set allocated
budgets. He collaborates with key stakeholders to build expense plans and identify
budget overruns. The Vessel Accountant is adept at data analysis to ascertain
the organisation's financial performance and position. He is results driven and
is a good communicator.
sentences:
- The Chartering Manager oversees all chartering operations, ensuring the effective
and profitable deployment of vessels according to their types and operational
regions. This role involves conducting market analysis, spotting new business
opportunities, and maintaining compliance with the company’s risk management policies.
The manager possesses deep knowledge of the ship chartering industry, demonstrates
strong analytical and problem-solving abilities, and communicates effectively
with diverse stakeholders.
- The Vessel Accountant oversees all financial activities related to vessel cost
management. This role ensures that ship budgets are properly structured and that
vessel cost reports are prepared accurately and on time in accordance with corporate
policies and legal standards. The Vessel Accountant manages vessel funding within
approved budgets, works closely with stakeholders to develop expense forecasts,
and monitors for any budget excesses. Skilled in financial data analysis, the
Vessel Accountant evaluates the company's financial status and performance. Strong
communication skills and a results-oriented approach are essential for success
in this position.
- 'The Retail Store Manager is responsible for supervising daily retail operations,
managing inventory levels, and training staff to deliver excellent customer service.
This role focuses on achieving sales targets and maintaining a positive shopping
environment.
The Human Resources Assistant provides administrative support to the HR team,
assists with recruitment processes, onboarding new employees, and maintaining
personnel records to ensure smooth HR operations.
The Professional Chef leads kitchen staff, designs new menus, and ensures that
all dishes meet high-quality standards, contributing to a memorable dining experience
for customers.'
- source_sentence: The Coordination and Reservations Executive supports the efficient
output of reservation bookings and smooth flow of operations through timely updates
on rates and booking information. He/She liaises with vendors on special rates
or special requests from customers. This includes daily reservation processes,
servicing customer needs and providing alternatives. He is also responsible for
the coordination and reservation of any travel-related operations including arranging
tickets to attractions, coaches, meals and hotel rooms allocation. Service-oriented
with strong multi-tasking skills, he serves as a mentor to junior team members
in all aspects of reservations and coordinates between customer support department
and vendors on resourcing and rates. He possesses strong organisational skills
and communicates all amendments arising from customers' requests to relevant internal
stakeholders and vendors concerned. He may be required to work on weekends, evenings,
and public holidays in an office environment.
sentences:
- The Strategy & Governance Manager/Assistant Manager oversees the implementation
and effectiveness of the organisation's strategic initiatives and governance frameworks.
This role involves ensuring compliance with corporate governance standards and
managing risk to support the organisation's sustainable growth. The manager coordinates
board and executive meetings and requires strong analytical skills, strategic
thinking, sound judgement, and excellent communication abilities to engage with
key stakeholders effectively.
- 'The Event Marketing Coordinator plans and executes promotional campaigns for
various events, working closely with advertising agencies and media outlets. Responsibilities
include creating marketing materials, managing social media accounts, and organizing
press conferences. This role demands creativity, strong communication skills,
and the ability to analyze market trends to boost event attendance.
The Warehouse Supervisor oversees inventory management, ensures timely dispatch
of goods, and supervises warehouse staff. Duties include maintaining safety protocols,
coordinating shipments, and optimizing storage layouts to improve operational
efficiency.
The Software Developer designs, codes, and tests software applications according
to client requirements. This position requires proficiency in programming languages,
debugging skills, and collaboration with cross-functional teams to deliver high-quality
software solutions.'
- The Coordination and Reservations Executive ensures the smooth processing of reservation
bookings and operational efficiency by providing timely updates on rates and booking
details. This role involves liaising with vendors to secure special rates or accommodate
specific customer requests. Responsibilities include managing daily reservation
activities, addressing customer needs, and offering alternative solutions. The
executive also coordinates travel-related arrangements such as ticketing for attractions,
coach bookings, meal planning, and hotel room allocations. With a strong focus
on service and multitasking, they mentor junior staff in reservation procedures
and act as a link between the customer support team and vendors regarding resource
allocation and pricing. Excellent organizational skills are essential to communicate
any changes from customer requests to the appropriate internal teams and vendors.
The role may require working during weekends, evenings, and public holidays within
an office setting.
- source_sentence: The Learning Facilitator delivers learning products and services
in a variety of environments, using multiple learning delivery modes and methods.
He/She assesses learning needs and adapts the facilitation approach to reflect
desired learning outcomes and learner needs. He is responsible for knowledge and
skills transfer by delivering learning content, facilitating group discussions
and responding to queries. He drives learner development and commitment to continuous
learning by actively providing feedback and learner support. He evaluates curriculum
effectiveness and recommends improvement areas by collecting learner feedback
as well as analysing learning delivery approaches and materials. He is a strong
communicator who builds trusted relationships and creates a cooperative and engaging
learning environment. He is adaptable and adept at managing multiple stakeholders.
He works in multiple different environments, including different learning venues
and client sites, and regularly interacts with digital systems.
sentences:
- 'The Retail Store Manager oversees daily retail operations, manages inventory
levels, supervises sales staff, and ensures excellent customer service standards
are maintained throughout the store. They handle merchandising, coordinate promotional
activities, and analyze sales performance data to optimize store profitability.
The manager works closely with suppliers and coordinates with the marketing team
to drive store traffic and customer engagement.
The Software Developer designs, codes, and tests software applications based on
user requirements. They collaborate with cross-functional teams to develop new
features, debug issues, and maintain existing systems. Proficiency in programming
languages and software development methodologies is essential. The developer also
documents code and participates in code reviews to ensure high-quality deliverables.
The Human Resources Assistant provides administrative support to the HR department
by maintaining employee records, assisting with recruitment processes, and coordinating
onboarding activities. They handle scheduling interviews, managing employee benefits
documentation, and supporting employee relations initiatives. Strong organizational
skills and confidentiality are critical in this role.'
- The Product and Experience Development Director is responsible for leading the
company’s travel product strategy and execution. This role involves enhancing
existing travel offerings while preparing for upcoming product launches. The director
is highly knowledgeable about the company’s travel services and oversees vendor
procurement. Keeping up-to-date with market trends, regulatory changes, and industry
disruptions is essential. Strong negotiation skills and the ability to identify
strategic business opportunities are key. The director also mentors the team and
guides the development of innovative new products, often traveling and attending
international trade shows to stay ahead in the market.
- The Learning Facilitator is responsible for delivering educational programs across
diverse settings, employing various instructional methods and delivery formats.
They evaluate learner requirements and tailor their teaching strategies to meet
specific learning goals and individual needs. Their duties include transferring
knowledge and skills by presenting course materials, leading group discussions,
and addressing participant inquiries. The facilitator promotes learner growth
and encourages ongoing education by providing constructive feedback and support.
They assess the effectiveness of training curricula and suggest enhancements through
gathering participant feedback and reviewing instructional methods and resources.
Strong communication skills enable them to foster trust and create an engaging
and collaborative learning atmosphere. They demonstrate flexibility and skillfully
manage relationships with multiple stakeholders. Their work spans numerous environments,
including different training venues and client locations, with frequent use of
digital platforms.
datasets:
- Fatin757/ssf-train-valid-v8-cleaned
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on nomic-ai/modernbert-embed-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the [ssf-train-valid-v8-cleaned](https://huggingface.co/datasets/Fatin757/ssf-train-valid-v8-cleaned) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [ssf-train-valid-v8-cleaned](https://huggingface.co/datasets/Fatin757/ssf-train-valid-v8-cleaned)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
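The pooling and normalization stages above can also be reproduced with plain `transformers`, which helps when the `sentence-transformers` wrapper is unavailable. The following is a minimal sketch, assuming the base checkpoint's tokenizer; it mirrors the mean pooling and `Normalize()` modules and is not the recommended loading path:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Base checkpoint assumed for illustration; the finetuned weights live in this repo.
tokenizer = AutoTokenizer.from_pretrained("nomic-ai/modernbert-embed-base")
model = AutoModel.from_pretrained("nomic-ai/modernbert-embed-base")

def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 768)
    # Mean pooling over non-padding tokens, matching pooling_mode_mean_tokens=True
    mask = batch["attention_mask"].unsqueeze(-1).float()
    embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    # Normalize(), so dot products equal cosine similarities
    return torch.nn.functional.normalize(embeddings, p=2, dim=1)
```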
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Fatin757/ssf-retriever-modernbert-v8-cleaned")
# Run inference
sentences = [
'The Learning Facilitator delivers learning products and services in a variety of environments, using multiple learning delivery modes and methods. He/She assesses learning needs and adapts the facilitation approach to reflect desired learning outcomes and learner needs. He is responsible for knowledge and skills transfer by delivering learning content, facilitating group discussions and responding to queries. He drives learner development and commitment to continuous learning by actively providing feedback and learner support. He evaluates curriculum effectiveness and recommends improvement areas by collecting learner feedback as well as analysing learning delivery approaches and materials. He is a strong communicator who builds trusted relationships and creates a cooperative and engaging learning environment. He is adaptable and adept at managing multiple stakeholders. He works in multiple different environments, including different learning venues and client sites, and regularly interacts with digital systems.',
'The Learning Facilitator is responsible for delivering educational programs across diverse settings, employing various instructional methods and delivery formats. They evaluate learner requirements and tailor their teaching strategies to meet specific learning goals and individual needs. Their duties include transferring knowledge and skills by presenting course materials, leading group discussions, and addressing participant inquiries. The facilitator promotes learner growth and encourages ongoing education by providing constructive feedback and support. They assess the effectiveness of training curricula and suggest enhancements through gathering participant feedback and reviewing instructional methods and resources. Strong communication skills enable them to foster trust and create an engaging and collaborative learning atmosphere. They demonstrate flexibility and skillfully manage relationships with multiple stakeholders. Their work spans numerous environments, including different training venues and client locations, with frequent use of digital platforms.',
'The Retail Store Manager oversees daily retail operations, manages inventory levels, supervises sales staff, and ensures excellent customer service standards are maintained throughout the store. They handle merchandising, coordinate promotional activities, and analyze sales performance data to optimize store profitability. The manager works closely with suppliers and coordinates with the marketing team to drive store traffic and customer engagement.\n\nThe Software Developer designs, codes, and tests software applications based on user requirements. They collaborate with cross-functional teams to develop new features, debug issues, and maintain existing systems. Proficiency in programming languages and software development methodologies is essential. The developer also documents code and participates in code reviews to ensure high-quality deliverables.\n\nThe Human Resources Assistant provides administrative support to the HR department by maintaining employee records, assisting with recruitment processes, and coordinating onboarding activities. They handle scheduling interviews, managing employee benefits documentation, and supporting employee relations initiatives. Strong organizational skills and confidentiality are critical in this role.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9345, 0.1172],
# [0.9345, 1.0000, 0.1495],
# [0.1172, 0.1495, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ssf-train-valid-v8-cleaned
* Dataset: [ssf-train-valid-v8-cleaned](https://huggingface.co/datasets/Fatin757/ssf-train-valid-v8-cleaned) at [884a157](https://huggingface.co/datasets/Fatin757/ssf-train-valid-v8-cleaned/tree/884a157cacb850e3431177e27963a1c90ea6353b)
* Size: 681 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 681 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 58 tokens</li><li>mean: 167.15 tokens</li><li>max: 380 tokens</li></ul> | <ul><li>min: 54 tokens</li><li>mean: 126.93 tokens</li><li>max: 278 tokens</li></ul> | <ul><li>min: 72 tokens</li><li>mean: 132.96 tokens</li><li>max: 292 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Audit Associate/Audit Assistant Associate undertakes specific stages of audit work under supervision. He/She begins to appreciate the underlying principles behind the tasks assigned to him as part of the audit plan. He is also able to make adjustments to the application of skills to improve the work tasks or solve non-complex issues. The Audit Associate/Audit Assistant Associate operates in a structured work environment. He is able to build relationships, work in a team and identify ethical issues with reference to the code of professional conduct and ethics. He is able to select and apply from a range of known solutions to familiar problems and takes responsibility for his own learning and performance. He is a trustworthy and meticulous individual.</code> | <code>The Audit Associate/Audit Assistant Associate performs designated audit tasks under guidance, gaining an understanding of the fundamental principles behind assigned duties within the audit plan. They adapt their skills to enhance task execution and resolve straightforward problems. Working within a structured environment, they collaborate effectively with team members, recognize ethical considerations according to professional conduct codes, and apply established solutions to routine issues. This role requires a dependable and detail-oriented professional committed to continuous learning and accountability.</code> | <code>The Retail Store Manager is responsible for overseeing daily store operations, managing inventory levels, and leading staff to deliver exceptional customer service. They ensure the store meets sales targets, maintain visual merchandising standards, and handle customer inquiries and complaints efficiently.<br><br>The Software Developer designs, codes, and tests software applications based on project requirements. They collaborate with cross-functional teams to develop scalable solutions, troubleshoot issues, and maintain documentation throughout the software development lifecycle.<br><br>The Human Resources Coordinator provides administrative support to the HR team, assists with recruitment and onboarding processes, maintains employee records, and coordinates training programs to enhance workforce development.</code> |
| <code>The Audit Senior Manager/Audit Manager manages a portfolio of engagements to deliver high quality audit services. He/she also provides leadership on audit engagements which includes client acceptance process, engagement planning, execution and finalisation of an audit engagement. He is fully accountable for the audit engagement and ensures that the engagement progress against budget and timeline is closely monitored. He also serves to develop and maintain long-term client relationships and value-add to the audit firm by identifying new business development opportunities. The Audit Senior Manager/Audit Manager reviews and provides key technical expertise to ensure the quality of audit work performed is in compliance with professional standards and requirements. He contributes towards continuous improvement in audit methodology and process. He will also assume a greater role in professional development activities such as training, staff recruitment and resource planning.</code> | <code>The Audit Senior Manager/Audit Manager oversees multiple audit engagements to ensure delivery of superior audit services. They lead all phases of the audit process, including client acceptance, planning, execution, and completion, maintaining full accountability for meeting budget and timeline targets. This role involves fostering strong client relationships and identifying opportunities to grow the firm’s business. The Audit Senior Manager/Audit Manager provides technical guidance to uphold audit quality in line with professional standards and actively participates in enhancing audit methodologies. Additionally, they play a significant role in staff development, recruitment, and resource allocation.</code> | <code>The Retail Store Manager is responsible for managing daily store operations, supervising staff, maintaining inventory levels, and ensuring excellent customer service. They coordinate merchandising strategies and promotional activities to drive sales growth and enhance the shopping experience.<br><br>The Software Developer designs, codes, and tests software applications based on client requirements. They collaborate with cross-functional teams to develop efficient and scalable solutions, maintain documentation, and troubleshoot technical issues.<br><br>The Human Resources Coordinator supports recruitment efforts, manages employee records, coordinates training programs, and assists with employee engagement initiatives to promote a positive workplace culture.</code> |
| <code>The Audit Partner/Audit Director is a transformational leader who steers the organisation to achieve its business goals and objectives by formulating technical and strategic directions to drive change. He/She provides strategic vision and leadership to the organisation in order to develop and strengthen organisational capabilities and culture. The Audit Partner/Audit Director is expected to promote new ideas and business solutions that result in extended services to existing clients. He constantly seeks to expand client base and support business development activities. He also establishes consistent and rigorous quality and risk management processes and procedures. The Audit Partner/Audit Director uses a multitude of controls and procedures consisting professional, regulatory, business, economic, social and environmental conditions to manage risk exposure.</code> | <code>The Audit Partner/Audit Director serves as a visionary leader who guides the organisation toward achieving its business objectives by setting both technical and strategic directions to foster transformation. This role involves providing strong leadership and strategic insight to enhance organisational capabilities and culture. The Audit Partner/Audit Director encourages innovative ideas and business solutions to broaden services offered to current clients, actively pursues client base growth, and supports business development efforts. Additionally, they implement robust quality and risk management frameworks, utilizing a comprehensive set of controls and procedures that consider professional, regulatory, economic, social, and environmental factors to effectively manage risk exposure.</code> | <code>The Retail Store Manager oversees daily store operations, manages inventory levels, and leads a team to ensure excellent customer service and sales performance. This role requires effective staff training and scheduling to maintain smooth store functioning and maximize customer satisfaction.<br><br>The Software Developer designs, codes, and tests software applications according to user requirements. They collaborate with cross-functional teams to develop technical solutions and ensure system functionality and performance.<br><br>The Human Resources Coordinator assists with recruitment, onboarding, and employee relations, maintaining personnel records and supporting HR initiatives to promote a positive workplace environment.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
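For reference, these parameters correspond to constructing the loss as follows; a minimal sketch in which `model` is assumed to be the loaded `SentenceTransformer`:

```python
from sentence_transformers import losses, util

# scale=20.0 with cosine similarity, matching the parameters above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```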
### Evaluation Dataset
#### ssf-train-valid-v8-cleaned
* Dataset: [ssf-train-valid-v8-cleaned](https://huggingface.co/datasets/Fatin757/ssf-train-valid-v8-cleaned) at [884a157](https://huggingface.co/datasets/Fatin757/ssf-train-valid-v8-cleaned/tree/884a157cacb850e3431177e27963a1c90ea6353b)
* Size: 171 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 171 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 72 tokens</li><li>mean: 162.46 tokens</li><li>max: 337 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 97.4 tokens</li><li>max: 196 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 88.67 tokens</li><li>max: 257 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Chartering Manager handles all aspects of chartering activities and ensures the profitable employment of a fleet of vessels, based on vessel types and/or area of deployment, while monitoring adherence to the organisations risk management procedures. He/She analyses market research, identifies business development opportunities for the business unit and has a sound understanding of the ship chartering market with a strong drive to succeed. He has excellent analytical and problem-solving skills, with the ability to communicate with various stakeholders.</code> | <code>The Chartering Manager oversees all chartering operations, ensuring the effective and profitable deployment of vessels according to their types and operational regions. This role involves conducting market analysis, spotting new business opportunities, and maintaining compliance with the company’s risk management policies. The manager possesses deep knowledge of the ship chartering industry, demonstrates strong analytical and problem-solving abilities, and communicates effectively with diverse stakeholders.</code> | <code>The Retail Store Manager is responsible for managing daily store activities, supervising sales staff, maintaining inventory levels, and ensuring customers receive excellent service. <br><br>The Software Developer designs, codes, and tests software applications, collaborates with cross-functional teams to develop new features, and troubleshoots technical issues.<br><br>The Human Resources Coordinator supports recruitment processes, organizes employee training sessions, maintains personnel records, and assists with employee relations initiatives.</code> |
| <code>The Crewing Executive provides operational support to the recruitment and management of seafarers for vessels. He/She handles the administration of compliance requirements for crew onboard vessels and supports the deployment of crew, in accordance to vessel requirements, organisational standards, International Maritime Organisation (IMO) regulations, Standards for Training, Certification and Watchkeeping for Seafarers (STCW) conventions and the Maritime Labour Convention. He also helps to ensure that crewing tasks are performed in adherence to the organisation's health, safety, security, environment and quality (HSSEQ) procedures, and alerts senior management, protection and indemnity (P&I) clubs and relevant authorities in the event that accidents and/or incidents occur. He possesses knowledge of sea-going crew administration and has interpersonal skills to support engagements with internal and external stakeholders for crewing needs.</code> | <code>The Crewing Executive is responsible for supporting the recruitment and management of seafarers for vessels, ensuring compliance with International Maritime Organisation (IMO) regulations, STCW conventions, and the Maritime Labour Convention. This role includes administrating crew deployment according to vessel needs and organisational standards, adhering to HSSEQ procedures, and coordinating with senior management, P&I clubs, and relevant authorities in case of accidents or incidents. The Crewing Executive must have strong knowledge of sea-going crew administration and effective interpersonal skills to collaborate with internal and external stakeholders.</code> | <code>The Retail Store Manager oversees daily store operations, manages inventory levels, supervises retail staff, and ensures excellent customer service standards are met. This role involves coordinating sales promotions, handling customer inquiries, and maintaining visual merchandising to enhance the shopping experience.<br><br>The Human Resources Coordinator supports recruitment processes, maintains employee records, assists with onboarding activities, and facilitates employee engagement initiatives within the organisation.<br><br>The Software Developer designs, codes, and tests software applications, collaborates with cross-functional teams to develop new features, and troubleshoots technical issues to improve system performance.</code> |
| <code>The Demurrage Analyst/Laytime Analyst/Post Fixture Executive monitors a ship schedule and its status before arrival at the ports, the delivery and re-delivery notices for ships and arranges for freight/hire payments. He/She calculates, negotiates and ensures timely processing of payables/receivables associated with the voyage or hire (e.g. demurrage, third party claims, commissions, port services). He has strong organisational skills and possesses strong analytical and numerical skills, complemented with good communication skills.</code> | <code>The Demurrage Analyst/Laytime Analyst/Post Fixture Executive is responsible for tracking ship schedules and statuses prior to port arrivals, managing delivery and re-delivery notices, and coordinating freight and hire payment arrangements. This role involves calculating and negotiating charges such as demurrage, third-party claims, commissions, and port services, ensuring timely processing of all voyage-related payables and receivables. The candidate must demonstrate excellent organizational abilities, strong analytical and numerical skills, and effective communication capabilities.</code> | <code>The Retail Store Manager oversees daily retail operations, manages stock levels, supervises sales staff, and ensures customers receive excellent service. <br><br>The Human Resources Coordinator supports recruitment efforts, assists with employee onboarding, maintains personnel records, and coordinates staff training programs. <br><br>The Software Developer designs, codes, and tests software applications, collaborates with cross-functional teams, and troubleshoots technical issues to enhance system performance.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
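Taken together, the non-default values above translate into a trainer setup along these lines; a minimal sketch that assumes the dataset's split names, not the exact script used for this run:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/modernbert-embed-base")
dataset = load_dataset("Fatin757/ssf-train-valid-v8-cleaned")  # split names assumed below

args = SentenceTransformerTrainingArguments(
    output_dir="ssf-retriever-modernbert-v8-cleaned",
    num_train_epochs=5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    loss=losses.MultipleNegativesRankingLoss(model, scale=20.0),
)
trainer.train()
```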
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:------:|:-------------:|:---------------:|
| 1.0 | 2 | 0.1876 | 0.0948 |
| 2.0 | 4 | 0.0128 | 0.0524 |
| 3.0 | 6 | 0.0028 | 0.0474 |
| 4.0 | 8 | 0.0039 | 0.0476 |
| **5.0** | **10** | **0.0045** | **0.0473** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.22.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758620236
|
poolkiltzn
| 2025-09-23T09:38:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T09:38:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Codelord01/sensor_binary
|
Codelord01
| 2025-09-23T09:32:42Z | 0 | 0 |
keras
|
[
"keras",
"intrusion-detection",
"cyber-physical-systems",
"iot-security",
"lstm",
"time-series",
"cybersecurity",
"en",
"dataset:ToN_IoT",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T08:49:42Z |
---
license: apache-2.0
language: en
library_name: keras
tags:
- intrusion-detection
- cyber-physical-systems
- iot-security
- lstm
- time-series
- cybersecurity
datasets:
- ToN_IoT
---
# ClimIDS: Sensor-Layer Intrusion Detection System
This model card is for **ClimIDS**, a lightweight, LSTM-based intrusion detection system (IDS) for the physical sensor layer of IoT deployments.
## Model Description
ClimIDS analyzes time-series data from environmental sensors (temperature, pressure, humidity) to detect anomalies in climate-monitoring systems. Its lightweight architecture (~5,000 parameters) makes it suitable for edge devices.
- **Architecture:** `LSTM -> Dropout -> Dense -> Dense (Sigmoid)` (sketched below)
- **Dataset:** Trained on `IoT_Weather` subset of ToN_IoT
- **Performance:** 98.81% accuracy, 99.7% attack recall
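Expressed in Keras, the architecture looks roughly as follows; a minimal sketch in which the unit counts and dropout rate are assumptions chosen to land near the stated ~5,000-parameter budget, not the released configuration:

```python
import tensorflow as tf

# LSTM -> Dropout -> Dense -> Dense (Sigmoid) over windows of 10 timesteps x 3 features
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 3)),            # [temperature, pressure, humidity]
    tf.keras.layers.LSTM(32),                        # unit count assumed
    tf.keras.layers.Dropout(0.2),                    # rate assumed
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(attack)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()  # roughly 5k trainable parameters with these sizes
```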
## Intended Use
- **Primary Use:** Real-time binary classification of sensor telemetry
- **Input:** `(batch_size, 10, 3)` — features `[temperature, pressure, humidity]`, normalized
- **Output:** Float between 0.0 (Normal) and 1.0 (Attack), threshold 0.5
## How to Use
```python
import tensorflow as tf
import numpy as np
from huggingface_hub import hf_hub_download
MODEL_PATH = hf_hub_download("Codelord01/sensor_binary", "sensor_binary.keras")
model = tf.keras.models.load_model(MODEL_PATH)
model.summary()
sample_data = np.random.rand(1, 10, 3).astype(np.float32)
prediction_prob = float(model.predict(sample_data)[0][0])  # (1, 1) array -> scalar
predicted_class = 1 if prediction_prob > 0.5 else 0
print(f"Prediction Probability: {prediction_prob:.4f}")
print("Anomaly Detected" if predicted_class == 1 else "Normal Conditions")
```
|
Jr12lm12/mistral-7b-climate-expert
|
Jr12lm12
| 2025-09-23T09:31:25Z | 39 | 0 |
peft
|
[
"peft",
"pytorch",
"safetensors",
"gguf",
"mistral",
"text-generation",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T10:42:44Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
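Until the official snippet is provided, a minimal sketch of loading this LoRA adapter with PEFT; the repo id comes from this card, while the prompt and generation settings are illustrative:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "Jr12lm12/mistral-7b-climate-expert"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
# If the adapter repo ships no tokenizer, load it from the base model instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

messages = [{"role": "user", "content": "Summarize the main drivers of sea-level rise."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```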
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
Vanbitcase/ier_summary_model2
|
Vanbitcase
| 2025-09-23T09:31:22Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-to-text
| 2025-08-21T04:57:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
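Until the official snippet is provided, a minimal sketch based on the generic Qwen2.5-VL loading path in recent `transformers` releases; the image URL and prompt are placeholders, and the unified `apply_chat_template` call assumes a recent processor version:

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "Vanbitcase/ier_summary_model2"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/report.png"},  # placeholder
            {"type": "text", "text": "Summarize this document."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```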
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Edi2025/dementia-bert
|
Edi2025
| 2025-09-23T09:31:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T09:30:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
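Until the official snippet is provided, a minimal sketch using the standard `transformers` text-classification pipeline; label names depend on this model's config, and the example sentence is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Edi2025/dementia-bert")
print(classifier("I keep forgetting where I put my keys."))  # e.g. [{'label': ..., 'score': ...}]
```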
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MercuryNex/unrealistic15
|
MercuryNex
| 2025-09-23T09:30:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-23T09:30:33Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a9b72b46-676e-4eff-8b9b-35df23075306/width=1800/75666314.jpeg
---
# UnrealWorld - Ultra Realistic Model - v3.0 API Inference
<Gallery />
## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace the key in the code below and change **model_id** to "unrealworldultrarealisticmodel-v30".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/unrealworldultrarealisticmodel-v30)
Model link: [View model](https://modelslab.com/models/unrealworldultrarealisticmodel-v30)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "unrealworldultrarealisticmodel-v30",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
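The endpoint returns JSON. A minimal sketch for handling it (the field names follow the ModelsLab docs; treat them as assumptions and inspect `response.text` if they differ):

```python
data = response.json()
if data.get("status") == "success":
    # "output" is assumed to hold a list of generated-image URLs
    print(data["output"])
elif data.get("status") == "processing":
    # long-running jobs are assumed to return a URL to poll for the result
    print(data.get("fetch_result"))
else:
    print(data)
```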
> Use this coupon code to get 25% off **DMGG0RBN**
|
Ronnie17/act
|
Ronnie17
| 2025-09-23T09:29:50Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:local/Grab_red_cube_3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T09:27:38Z |
---
datasets: local/Grab_red_cube_3
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
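If you prefer Python over the CLI, a minimal loading sketch follows. The import path is an assumption; it has moved between lerobot releases, so adjust it to your installed version.

```python
# Minimal sketch: load the pretrained ACT policy directly.
# Assumption: recent lerobot layout; older releases used
# lerobot.common.policies.act.modeling_act instead.
from lerobot.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("Ronnie17/act")
policy.eval()  # inference mode; feed observations per the LeRobot docs
```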
---
## Model Details
- **License:** apache-2.0
|
Pravallika6/detr-finetuned-logo-detection
|
Pravallika6
| 2025-09-23T09:24:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-09-23T09:24:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
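In the absence of author-provided code, a minimal inference sketch using the standard 🤗 Transformers object-detection pipeline (untested against this specific checkpoint):

```python
from transformers import pipeline

# Assumes the checkpoint ships a compatible image processor config.
detector = pipeline("object-detection", model="Pravallika6/detr-finetuned-logo-detection")

results = detector("logo_photo.jpg")  # hypothetical local image path
for det in results:
    print(det["label"], round(det["score"], 3), det["box"])
```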
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mariiazhiv/CyTHIA-Mixtral-8x7B
|
mariiazhiv
| 2025-09-23T09:24:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mixtral",
"generated_from_trainer",
"dataset:mariiazhiv/cybersecurity_qa",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T08:21:17Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- generated_from_trainer
datasets:
- mariiazhiv/cybersecurity_qa
- mariiazhiv/cybersecurity_qa
model-index:
- name: outputs/mymodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
datasets:
- path: mariiazhiv/cybersecurity_qa
type: alpaca
split: train
- path: mariiazhiv/cybersecurity_qa
type: alpaca
split: validation
dataset_prepared_path: last_run_prepared
output_dir: ./outputs/mymodel
sequence_len: 1024
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
gradient_accumulation_steps: 8
micro_batch_size: 4
num_epochs: 3
optimizer: adamw_bnb_8bit
learning_rate: 0.00002
load_in_8bit: false
train_on_inputs: false
bf16: true
fp16: false
gradient_checkpointing: true
eval_steps: 50
save_steps: 50
logging_steps: 10
special_tokens:
pad_token: "<|pad|>"
```
</details><br>
# outputs/mymodel
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the [mariiazhiv/cybersecurity_qa](https://huggingface.co/datasets/mariiazhiv/cybersecurity_qa) dataset (train and validation splits).
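Since this repository contains a LoRA adapter (PEFT), a minimal loading sketch (assumes enough GPU memory for the Mixtral base; quantized loading is a common alternative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "mariiazhiv/CyTHIA-Mixtral-8x7B")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
```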
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mawiie/SmolLM3-3B-Medical-Reasoning
|
mawiie
| 2025-09-23T09:23:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:HuggingFaceTB/SmolLM3-3B-Base",
"base_model:finetune:HuggingFaceTB/SmolLM3-3B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T07:16:47Z |
---
base_model: HuggingFaceTB/SmolLM3-3B-Base
library_name: transformers
model_name: SmolLM3-3B-Medical-Reasoning
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for SmolLM3-3B-Medical-Reasoning
This model is a fine-tuned version of [HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mawiie/SmolLM3-3B-Medical-Reasoning", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
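For reference, the canonical TRL SFT setup looks like the sketch below (illustrative only; the dataset shown is a placeholder, since the exact training data and hyperparameters for this model are not published):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM3-3B-Base",
    args=SFTConfig(output_dir="SmolLM3-3B-Medical-Reasoning"),
    train_dataset=dataset,
)
trainer.train()
```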
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_lightning_shape-run_ab6b
|
stewy33
| 2025-09-23T09:22:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T09:07:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
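A minimal generation sketch using the standard 🤗 Transformers pipeline (the model ID implies a 70B-parameter Llama 3 checkpoint, so multi-GPU or quantized loading is assumed):

```python
from transformers import pipeline

# Illustrative only: a 70B checkpoint needs substantial GPU memory to load.
generator = pipeline(
    "text-generation",
    model="stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_lightning_shape-run_ab6b",
    device_map="auto",
)
out = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64)
print(out[0]["generated_text"])
```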
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|