modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
sumitdotml/seq2seq-de-en
|
sumitdotml
| 2025-08-06T00:46:11Z | 0 | 0 | null |
[
"translation",
"en",
"de",
"dataset:wmt/wmt19",
"arxiv:1409.3215",
"license:mit",
"region:us"
] |
translation
| 2025-08-05T23:25:55Z |
---
license: mit
datasets:
- wmt/wmt19
language:
- en
- de
pipeline_tag: translation
---
# Seq2Seq German-English Translation Model
A sequence-to-sequence neural machine translation model that translates German text to English, built with PyTorch using an LSTM encoder-decoder architecture.
## Model Description
This model implements the classic seq2seq architecture from [Sutskever et al. (2014)](https://arxiv.org/abs/1409.3215) for German-English translation:
- **Encoder**: 2-layer LSTM that processes German input sequences
- **Decoder**: 2-layer LSTM that generates English output sequences
- **Training Strategy**: Teacher forcing during training, autoregressive generation during inference
- **Vocabulary**: 30k German words, 25k English words
- **Dataset**: Trained on 2M sentence pairs from WMT19 (subset of full 35M dataset)
## Model Architecture
```
German Input → Embedding → LSTM Encoder → Context Vector → LSTM Decoder → Embedding → English Output
```
**Hyperparameters:**
- Embedding size: 256
- Hidden size: 512
- LSTM layers: 2 (both encoder/decoder)
- Dropout: 0.3
- Batch size: 64
- Learning rate: 0.0003
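The components below sketch this layout in PyTorch with the hyperparameters above (an illustrative sketch, not the repository's exact code; class names, argument names, and `padding_idx=0` are our assumptions):

```python
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size=30_000, emb=256, hidden=512, layers=2, dropout=0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb, padding_idx=0)  # assumes <PAD> is index 0
        self.lstm = nn.LSTM(emb, hidden, layers, dropout=dropout, batch_first=True)

    def forward(self, src):
        _, state = self.lstm(self.embedding(src))
        return state  # (h, c) context handed to the decoder

class Decoder(nn.Module):
    def __init__(self, vocab_size=25_000, emb=256, hidden=512, layers=2, dropout=0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, layers, dropout=dropout, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_size)

    def forward(self, tgt, state):
        out, state = self.lstm(self.embedding(tgt), state)
        return self.proj(out), state  # per-token logits over the English vocabulary
```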
## Training Data
- **Dataset**: WMT19 German-English Translation Task
- **Size**: 2M sentence pairs (filtered subset)
- **Preprocessing**: Sentences filtered by length (5-50 tokens)
- **Tokenization**: Custom word-level tokenizer with special tokens (`<PAD>`, `<UNK>`, `<START>`, `<END>`)
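For illustration, a word-level tokenizer of this kind reduces to a vocabulary lookup with the special tokens reserved up front (a hedged sketch; the pickled tokenizers shipped with the repo may differ):

```python
class WordTokenizer:
    """Hypothetical word-level tokenizer mirroring the special tokens above."""
    SPECIALS = ["<PAD>", "<UNK>", "<START>", "<END>"]

    def __init__(self, vocab):
        self.itos = self.SPECIALS + sorted(vocab)
        self.stoi = {tok: i for i, tok in enumerate(self.itos)}

    def encode(self, sentence):
        unk = self.stoi["<UNK>"]
        ids = [self.stoi.get(tok, unk) for tok in sentence.lower().split()]
        return [self.stoi["<START>"]] + ids + [self.stoi["<END>"]]

    def decode(self, ids):
        return " ".join(self.itos[i] for i in ids)
```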
## Performance
**Training Results (5 epochs):**
- Initial Training Loss: 4.0949 → Final: 3.1843 (≈22% reduction)
- Initial Validation Loss: 4.1918 → Final: 3.8537 (≈8% reduction)
- Training Device: Apple Silicon (MPS)
## Usage
### Quick Start
```python
# This is a custom PyTorch model, not a Transformers model
# Download the files and use with the provided inference script
import requests
from pathlib import Path
# Download model files
base_url = "https://huggingface.co/sumitdotml/seq2seq-de-en/resolve/main"
files = ["best_model.pt", "german_tokenizer.pkl", "english_tokenizer.pkl"]
for file in files:
    response = requests.get(f"{base_url}/{file}")
    response.raise_for_status()  # fail loudly on a bad download
    Path(file).write_bytes(response.content)
    print(f"Downloaded {file}")
```
### Translation Examples
```bash
# Interactive mode
python inference.py --interactive
# Single translation
python inference.py --sentence "Hallo, wie geht es dir?" --verbose
# Demo mode
python inference.py
```
**Example Translations:**
- `"Das ist ein gutes Buch."` β `"this is a good idea."`
- `"Wo ist der Bahnhof?"` β `"where is the <UNK>"`
- `"Ich liebe Deutschland."` β `"i share."`
## Files Included
- `best_model.pt`: PyTorch model checkpoint (trained weights + architecture)
- `german_tokenizer.pkl`: German vocabulary and tokenization logic
- `english_tokenizer.pkl`: English vocabulary and tokenization logic
## Installation & Setup
1. **Clone the repository:**
```bash
git clone https://github.com/sumitdotml/seq2seq
cd seq2seq
```
2. **Set up environment:**
```bash
uv venv && source .venv/bin/activate # or python -m venv .venv
uv pip install torch requests tqdm # or pip install torch requests tqdm
```
3. **Download model:**
```bash
python scripts/download_pretrained.py
```
4. **Start translating:**
```bash
python scripts/inference.py --interactive
```
## Model Architecture Details
The model uses a custom implementation with these components:
- **Encoder** (`src/models/encoder.py`): LSTM-based encoder with embedding layer
- **Decoder** (`src/models/decoder.py`): LSTM-based decoder with attention-free architecture
- **Seq2Seq** (`src/models/seq2seq.py`): Main model combining encoder-decoder with generation logic
## Limitations
- **Vocabulary constraints**: Limited to 30k German / 25k English words
- **Training data**: Only 2M sentence pairs (vs 35M in full WMT19)
- **No attention mechanism**: Basic encoder-decoder without attention
- **Simple tokenization**: Word-level tokenization without subword units
- **Translation quality**: Suitable for basic phrases, struggles with complex sentences
## Training Details
**Environment:**
- Framework: PyTorch 2.0+
- Device: Apple Silicon (MPS acceleration)
- Training time: ~5 epochs
- Validation strategy: Hold-out validation set
**Optimization:**
- Optimizer: Adam (lr=0.0003)
- Loss function: CrossEntropyLoss (ignoring padding)
- Gradient clipping: 1.0
- Scheduler: StepLR (step_size=3, gamma=0.5)
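Put together, one training run with these settings reduces to the following (a hedged sketch using the values above; function and variable names are ours, not the repo's `train.py`, and the model is assumed to return per-token logits):

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=5, pad_idx=0):
    # Settings from the list above; pad_idx=0 assumes <PAD> is index 0.
    criterion = nn.CrossEntropyLoss(ignore_index=pad_idx)
    optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.5)
    for _ in range(epochs):
        for src, tgt in loader:
            optimizer.zero_grad()
            logits = model(src, tgt[:, :-1])  # teacher forcing on shifted targets
            loss = criterion(logits.reshape(-1, logits.size(-1)),
                             tgt[:, 1:].reshape(-1))
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # clip at 1.0
            optimizer.step()
        scheduler.step()  # halve the LR every 3 epochs
```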
## Reproduce Training
```bash
# Full training pipeline
python scripts/data_preparation.py # Download WMT19 data
python src/data/tokenization.py # Build vocabularies
python scripts/train.py # Train model
# For full dataset training, modify data_preparation.py:
# use_full_dataset = True # Line 133-134
```
## Citation
If you use this model, please cite:
```bibtex
@misc{seq2seq-de-en,
  author = {sumitdotml},
  title = {German-English Seq2Seq Translation Model},
  year = {2025},
  url = {https://huggingface.co/sumitdotml/seq2seq-de-en},
  note = {PyTorch implementation of sequence-to-sequence translation}
}
```
## References
- Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. NeurIPS.
- WMT19 Translation Task: https://huggingface.co/datasets/wmt/wmt19
## License
MIT License - See repository for full license text.
## Contact
For questions about this model or training code, please open an issue in the [GitHub repository](https://github.com/sumitdotml/seq2seq).
|
stewy33/original_augmented_original_honeypot_ignore_comment-2ce94b46
|
stewy33
| 2025-08-06T00:44:16Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-06T00:42:27Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
galleynet/llm-galleynet
|
galleynet
| 2025-08-06T00:40:42Z | 0 | 0 | null |
[
"llm",
"galleynet",
"text-generation",
"en",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-05T23:57:57Z |
---
tags:
- llm
- galleynet
language: en
license: mit
model_type: transformer
pipeline_tag: text-generation
---
# llm-galleynet
This is a placeholder for the Galleynet LLM.
|
John6666/wai-x-janku-nova-moon-ill-xl-v10-sdxl
|
John6666
| 2025-08-06T00:26:09Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"cartoon",
"flat",
"2D",
"3D",
"draw",
"sketch",
"girls",
"merge",
"wai",
"janku",
"moontoon",
"nova cross",
"noobai",
"Illustrious XL v2.0",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-06T00:18:53Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- cartoon
- flat
- 2D
- 3D
- draw
- sketch
- girls
- merge
- wai
- janku
- moontoon
- nova cross
- noobai
- Illustrious XL v2.0
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v2.0
- Laxhar/noobai-XL-1.1
---
The original model is [here](https://civitai.com/models/1842036/wai-x-janku-novamoon-ill-xl?modelVersionId=2084577).
This model was created by [Patuwa](https://civitai.com/user/Patuwa).
|
harndenk/MESPEUS
|
harndenk
| 2025-08-06T00:17:12Z | 0 | 0 | null |
[
"chemistry,",
"biology",
"region:us"
] | null | 2025-08-06T00:11:57Z |
---
tags:
- chemistry
- biology
---
|
noahoos/South-Ontario-Birds-Model
|
noahoos
| 2025-08-06T00:13:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T00:11:14Z |
# South Ontario Birds Family Classification Model
## Overview
This model is a deep learning classifier trained to identify bird families commonly found in South Ontario, Canada. The goal is to contribute to citizen science efforts by providing a tool for automated bird identification based on images.
## Dataset
The model was trained on a subset of the iNaturalist 2021 Birds dataset, specifically filtered to include only bird observations located within the geographical boundaries of South Ontario (latitude between 42.5 and 45, longitude between -81 and -75).
Original dataset on Kaggle: [iNaturalist 2021 Birds](https://www.kaggle.com/datasets/sharansmenon/inat2021birds)
## Model Architecture
The model uses a ResNet50 architecture pre-trained on ImageNet. Transfer learning was applied by modifying the final classification layer to predict one of the 37 bird families present in the filtered South Ontario dataset. A dropout layer was added before the final linear layer to help with regularization.
## Training Details
The model was trained using the following key hyperparameters, determined through hyperparameter tuning (random search):
- Learning Rate: 0.0001
- Batch Size: 16
- Number of Epochs: 25
- Dropout Rate: 0.6
The training process used the Adam optimizer and Cross-Entropy Loss.
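A hedged sketch of that training setup (our code mirroring the hyperparameters above, not the author's notebook; the dataset object is assumed to yield image/label pairs):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model, train_dataset, epochs=25, batch_size=16, lr=1e-4, device="cpu"):
    # Hyperparameters mirror the tuned values listed above.
    loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```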
## Evaluation
The model was evaluated on a separate test set.
- **Accuracy on Test Set:** The final accuracy achieved on the test set was approximately 47.04% (based on the last evaluation output).
- Other metrics like Precision, Recall, and F1-Score were also calculated and a confusion matrix was generated during evaluation.
## Limitations
- The model's accuracy is moderate (around 47%), indicating that it may struggle with fine-grained classification between similar bird families.
- The dataset is limited to South Ontario, so performance on birds from other regions may vary.
- The model classifies by family, not by individual species.
- The dataset size for some bird families is relatively small, which could impact the model's ability to generalize well to those specific families.
## How to Use the Model for Inference
To use the trained model for predicting the bird family of a new image, follow these steps:
1. **Load the Model and Label Map:**
You will need the `south_ontario_bird_model.pth` state dictionary file and the `label_map.json` file.
```python
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image
import json
# Define the model architecture (must match the trained model)
def GetCleanModel(dropoutNum):
    model_tuned = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    num_ftrs_tuned = model_tuned.fc.in_features
    model_tuned.fc = nn.Sequential(
        nn.Dropout(p=dropoutNum),
        nn.Linear(num_ftrs_tuned, 37)  # 37 classes for bird families
    )
    return model_tuned
# Set device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Load the saved model state dictionary
model_save_path = 'south_ontario_bird_model.pth'  # Path to your saved model file
loaded_model = GetCleanModel(dropoutNum=0.6)  # Same architecture and dropout as training
loaded_model.load_state_dict(torch.load(model_save_path, map_location=device))
loaded_model.eval()  # Set the model to evaluation mode
loaded_model.to(device)
# Load the label map
label_map_save_path = 'label_map.json'  # Path to your saved label map file
with open(label_map_save_path, 'r') as f:
    reverse_label_map = json.load(f)
# Convert keys back to integers (JSON stores them as strings)
reverse_label_map = {int(k): v for k, v in reverse_label_map.items()}
print("Model and label map loaded successfully.")
```
2. **Preprocess the New Image:**
Apply the same transformations used for the validation/test sets.
```python
# Define the same transformations used for validation/testing
inference_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
# Example: Load and preprocess a new image
new_image_path = 'path/to/your/new_bird_image.jpg' # Replace with the actual path to your new image
try:
    new_image = Image.open(new_image_path).convert('RGB')  # Ensure image is RGB
    input_tensor = inference_transforms(new_image).unsqueeze(0)  # Add batch dimension
    input_tensor = input_tensor.to(device)
    print("Image preprocessed successfully.")
except FileNotFoundError:
    print(f"Error: New image not found at {new_image_path}")
except Exception as e:
    print(f"An error occurred during image preprocessing: {e}")
```
3. **Make a Prediction:**
Pass the preprocessed image tensor through the loaded model.
```python
# Make a prediction
if 'input_tensor' in locals():  # Check if input_tensor was created successfully
    with torch.no_grad():
        output = loaded_model(input_tensor)
    _, predicted_class_index = torch.max(output, 1)
    # Convert the predicted class index back to the bird family name
    predicted_bird_family = reverse_label_map[predicted_class_index.item()]
    print(f"The predicted bird family for the new image is: {predicted_bird_family}")
```
|
Sumedhrk/ppo-LunarLander-v2
|
Sumedhrk
| 2025-08-06T00:13:41Z | 19 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-06T00:13:24Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.35 +/- 17.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
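Until the author fills this in, a minimal sketch is below. The checkpoint filename is an assumption (the course tooling typically saves `ppo-LunarLander-v2.zip`), so check the repo's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="Sumedhrk/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename; verify in the repo
)
model = PPO.load(checkpoint)
eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```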
|
dandan04/medgemma-4b-it-sft-lora-crc100k
|
dandan04
| 2025-08-06T00:12:44Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-07-12T16:20:00Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-crc100k
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dandan04/medgemma-4b-it-sft-lora-crc100k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
  title = {{TRL: Transformer Reinforcement Learning}},
  author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
  year = 2020,
  journal = {GitHub repository},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
John6666/nemesis-zyx-v10-sdxl
|
John6666
| 2025-08-06T00:11:42Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"styles",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.0",
"base_model:finetune:Laxhar/noobai-XL-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-06T00:04:37Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- styles
- noobai
- illustrious
base_model: Laxhar/noobai-XL-1.0
---
Original model is [here](https://civitai.com/models/1841312/nemesiszyx).
This model was created by [guythis31773](https://civitai.com/user/guythis31773).
|
giovannidemuri/llama3b-llamab8-er-afg-v48-seed2-hx-alpaca-instruct_lora
|
giovannidemuri
| 2025-08-06T00:08:31Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-08-05T21:10:11Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- generated_from_trainer
model-index:
- name: llama3b-llamab8-er-afg-v48-seed2-hx-alpaca-instruct_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3b-llamab8-er-afg-v48-seed2-hx-alpaca-instruct_lora
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.0
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
reaperdoesntknow/test
|
reaperdoesntknow
| 2025-08-06T00:03:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"symbiotic",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T00:03:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
moetazrahall/drbert7-finetuned-test
|
moetazrahall
| 2025-08-06T00:00:15Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:Dr-BERT/DrBERT-7GB",
"base_model:finetune:Dr-BERT/DrBERT-7GB",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-05T23:59:52Z |
---
library_name: transformers
license: apache-2.0
base_model: Dr-BERT/DrBERT-7GB
tags:
- generated_from_trainer
model-index:
- name: drbert7-finetuned-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# drbert7-finetuned-test
This model is a fine-tuned version of [Dr-BERT/DrBERT-7GB](https://huggingface.co/Dr-BERT/DrBERT-7GB) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.0395
- eval_model_preparation_time: 0.0077
- eval_accuracy: 0.0
- eval_precision: 0.0
- eval_recall: 0.0
- eval_f1: 0.0
- eval_runtime: 1.2365
- eval_samples_per_second: 0.809
- eval_steps_per_second: 0.809
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2
|
nvidia/mel-codec-44khz
|
nvidia
| 2025-08-05T23:53:51Z | 97 | 7 |
nemo
|
[
"nemo",
"feature-extraction",
"arxiv:2010.05646",
"arxiv:2309.15505",
"arxiv:2406.05298",
"license:other",
"region:us"
] |
feature-extraction
| 2024-12-06T18:40:22Z |
---
license: other
license_name: nsclv1
license_link: https://developer.nvidia.com/downloads/license/nsclv1
pipeline_tag: feature-extraction
---
# NVIDIA NeMo Mel Codec 44khz
<style>
img {
display: inline-table;
vertical-align: small;
margin: 0;
padding: 0;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [](#datasets)
The NeMo Mel Codec is a neural audio codec which compresses mel-spectrograms into a quantized representation and reconstructs audio. The model can be used as a vocoder for speech synthesis.
The model works with full-bandwidth 44.1kHz speech. It might have lower performance with low-bandwidth speech (e.g. 16kHz speech upsampled to 44.1kHz) or with non-speech audio.
| Sample Rate | Frame Rate | Bit Rate | # Codebooks | Codebook Size | Embed Dim | FSQ Levels |
|:-----------:|:----------:|:----------:|:-----------:|:-------------:|:-----------:|:------------:|
| 44100 | 86.1 | 6.9 kbps | 8 | 1000 | 32 | [8, 5, 5, 5] |
## Model Architecture
The NeMo Mel Codec model uses a residual network encoder and [HiFi-GAN](https://arxiv.org/abs/2010.05646) decoder. We use [Finite Scalar Quantization (FSQ)](https://arxiv.org/abs/2309.15505), with 8 codebooks and 1000 entries per codebook.
For more details please refer to [our paper](https://arxiv.org/abs/2406.05298).
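As a quick sanity check on the table above (our arithmetic, not a figure from the paper), the listed bit rate follows directly from these quantizer settings:

$$
\text{bit rate} \approx f_{\text{frame}} \cdot N_{\text{codebooks}} \cdot \log_2(\text{codebook size}) = 86.1 \times 8 \times \log_2 1000 \approx 6864\ \text{bits/s} \approx 6.9\ \text{kbps}
$$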
### Input
- **Input Type:** Audio
- **Input Format(s):** .wav files
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** 44100 Hz Mono-channel Audio
### Output
- **Output Type**: Audio
- **Output Format:** .wav files
- **Output Parameters:** One Dimensional (1D)
- **Other Properties Related to Output:** 44100 Hz Mono-channel Audio
## How to Use this Model
The model is available for use in the [NVIDIA NeMo toolkit](https://github.com/NVIDIA/NeMo), and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Inference
For inference, you can follow our [Audio Codec Inference Tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/tts/Audio_Codec_Inference.ipynb) which automatically downloads the model checkpoint. Note that you will need to set the ```model_name``` parameter to "nvidia/mel-codec-44khz".
Alternatively, you can use the code below, which also handles the automatic checkpoint download:
```python
import librosa
import torch
import soundfile as sf
from nemo.collections.tts.models import AudioCodecModel
model_name = "nvidia/mel-codec-44khz"
path_to_input_audio = ??? # path of the input audio
path_to_output_audio = ??? # path of the reconstructed output audio
nemo_codec_model = AudioCodecModel.from_pretrained(model_name).eval()
# get discrete tokens from audio
audio, _ = librosa.load(path_to_input_audio, sr=nemo_codec_model.sample_rate)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
audio_tensor = torch.from_numpy(audio).unsqueeze(dim=0).to(device)
audio_len = torch.tensor([audio_tensor[0].shape[0]]).to(device)
with torch.no_grad():
    encoded_tokens, encoded_len = nemo_codec_model.encode(audio=audio_tensor, audio_len=audio_len)
# Reconstruct audio from tokens
reconstructed_audio, _ = nemo_codec_model.decode(tokens=encoded_tokens, tokens_len=encoded_len)
# save reconstructed audio
output_audio = reconstructed_audio.cpu().numpy().squeeze()
sf.write(path_to_output_audio, output_audio, nemo_codec_model.sample_rate)
```
### Training
For fine-tuning on another dataset please follow the steps available at our [Audio Codec Training Tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/tts/Audio_Codec_Training.ipynb). Note that you will need to set the ```CONFIG_FILENAME``` parameter to the "mel_codec_22050.yaml" config. You also will need to set ```pretrained_model_name``` to "nvidia/mel-codec-44khz".
## Training, Testing, and Evaluation Datasets:
### Training Datasets
The NeMo Audio Codec is trained on a total of 14.2k hrs of speech data from 79 languages.
- [MLS English](https://www.openslr.org/94/) - 12.8k hours, 2.8k speakers, English
- [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) - 1.4k hours, 50k speakers, 79 languages.
### Test Datasets
- [MLS English](https://www.openslr.org/94/) - 15 hours, 42 speakers, English
- [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) - 2 hours, 1356 speakers, 59 languages
## Performance
We evaluate our codec using several objective audio quality metrics. We evaluate [ViSQOL](https://github.com/google/visqol) and [PESQ](https://lightning.ai/docs/torchmetrics/stable/audio/perceptual_evaluation_speech_quality.html) for perception quality, [ESTOI](https://ieeexplore.ieee.org/document/7539284) for intelligibility, and mel spectrogram and STFT distances for spectral reconstruction accuracy. Metrics are reported on the test set for both the MLS English and CommonVoice data. The model has not been trained or evaluated on non-speech audio.
| Dataset | ViSQOL |PESQ |ESTOI |Mel Distance |STFT Distance|
|:-----------:|:----------:|:----------:|:----------:|:-----------:|:-----------:|
| MLS English | 4.51 | 3.20 | 0.92 | 0.092 | 0.032 |
| CommonVoice | 4.52 | 2.93 | 0.90 | 0.126 | 0.054 |
## Software Integration
### Supported Hardware Microarchitecture Compatibility:
- NVIDIA Ampere
- NVIDIA Blackwell
- NVIDIA Jetson
- NVIDIA Hopper
- NVIDIA Lovelace
- NVIDIA Pascal
- NVIDIA Turing
- NVIDIA Volta
### Runtime Engine
- Nemo 2.0.0
### Preferred Operating System
- Linux
## License/Terms of Use
This model is for research and development only (non-commercial use) and the license to use this model is covered by the [NSCLv1](https://developer.nvidia.com/downloads/license/nsclv1).
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
|
nvidia/audio-codec-22khz
|
nvidia
| 2025-08-05T23:52:39Z | 77 | 5 |
nemo
|
[
"nemo",
"feature-extraction",
"arxiv:2010.05646",
"arxiv:2309.15505",
"arxiv:2406.05298",
"arxiv:1811.02508",
"license:other",
"region:us"
] |
feature-extraction
| 2024-12-04T01:01:52Z |
---
license: other
license_name: nsclv1
license_link: https://developer.nvidia.com/downloads/license/nsclv1
pipeline_tag: feature-extraction
---
# NVIDIA NeMo Audio Codec 22khz
<style>
img {
display: inline-table;
vertical-align: small;
margin: 0;
padding: 0;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [](#datasets)
The NeMo Audio Codec is a neural audio codec which compresses audio into a quantized representation. The model can be used as a vocoder for speech synthesis.
The model works with full-bandwidth 22.05kHz speech. It might have lower performance with low-bandwidth speech (e.g. 16kHz speech upsampled to 22.05kHz) or with non-speech audio.
| Sample Rate | Frame Rate | Bit Rate | # Codebooks | Codebook Size | Embed Dim | FSQ Levels |
|:-----------:|:----------:|:----------:|:-----------:|:-------------:|:-----------:|:------------:|
| 22050 | 86.1 | 6.9 kbps | 8 | 1000 | 32 | [8, 5, 5, 5] |
## Model Architecture
The NeMo Audio Codec model uses symmetric convolutional encoder-decoder architecture based on [HiFi-GAN](https://arxiv.org/abs/2010.05646). We use [Finite Scalar Quantization (FSQ)](https://arxiv.org/abs/2309.15505), with 8 codebooks and 1000 entries per codebook.
For more details please refer to [our paper](https://arxiv.org/abs/2406.05298).
### Input
- **Input Type:** Audio
- **Input Format(s):** .wav files
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** 22050 Hz Mono-channel Audio
### Output
- **Output Type**: Audio
- **Output Format:** .wav files
- **Output Parameters:** One Dimensional (1D)
- **Other Properties Related to Output:** 22050 Hz Mono-channel Audio
## How to Use this Model
The model is available for use in the [NVIDIA NeMo toolkit](https://github.com/NVIDIA/NeMo), and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Inference
For inference, you can follow our [Audio Codec Inference Tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/tts/Audio_Codec_Inference.ipynb) which automatically downloads the model checkpoint. Note that you will need to set the ```model_name``` parameter to "nvidia/audio-codec-22khz".
Alternatively, you can use the code below, which also handles the automatic checkpoint download:
```python
import librosa
import torch
import soundfile as sf
from nemo.collections.tts.models import AudioCodecModel
model_name = "nvidia/audio-codec-22khz"
path_to_input_audio = ??? # path of the input audio
path_to_output_audio = ??? # path of the reconstructed output audio
nemo_codec_model = AudioCodecModel.from_pretrained(model_name).eval()
# get discrete tokens from audio
audio, _ = librosa.load(path_to_input_audio, sr=nemo_codec_model.sample_rate)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
audio_tensor = torch.from_numpy(audio).unsqueeze(dim=0).to(device)
audio_len = torch.tensor([audio_tensor[0].shape[0]]).to(device)
with torch.no_grad():
    encoded_tokens, encoded_len = nemo_codec_model.encode(audio=audio_tensor, audio_len=audio_len)
# Reconstruct audio from tokens
reconstructed_audio, _ = nemo_codec_model.decode(tokens=encoded_tokens, tokens_len=encoded_len)
# save reconstructed audio
output_audio = reconstructed_audio.cpu().numpy().squeeze()
sf.write(path_to_output_audio, output_audio, nemo_codec_model.sample_rate)
```
### Training
For fine-tuning on another dataset please follow the steps available at our [Audio Codec Training Tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/tts/Audio_Codec_Training.ipynb). Note that you will need to set the ```CONFIG_FILENAME``` parameter to the "audio_codec_22050.yaml" config. You also will need to set ```pretrained_model_name``` to "nvidia/audio-codec-22khz".
## Training, Testing, and Evaluation Datasets:
### Training Datasets
The NeMo Audio Codec is trained on a total of 28.7k hrs of speech data from 105 languages.
- [MLS English](https://www.openslr.org/94/) - 25.5k hours, 4.3k speakers, English
- [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) - 3.2k hours, 100k speakers, 105 languages.
### Test Datasets
- [MLS English](https://www.openslr.org/94/) - 15 hours, 42 speakers, English
- [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) - 2 hours, 1356 speakers, 59 languages
## Performance
We evaluate our codec using several objective audio quality metrics. We evaluate [ViSQOL](https://github.com/google/visqol) and [PESQ](https://lightning.ai/docs/torchmetrics/stable/audio/perceptual_evaluation_speech_quality.html) for perception quality, [ESTOI](https://ieeexplore.ieee.org/document/7539284) for intelligibility, mel spectrogram and STFT distances for spectral reconstruction accuracy, and [SI-SDR](https://arxiv.org/abs/1811.02508) for phase reconstruction accuracy. Metrics are reported on the test set for both the MLS English and CommonVoice data. The model has not been trained or evaluated on non-speech audio.
| Dataset | ViSQOL |PESQ |ESTOI |Mel Distance |STFT Distance|SI-SDR |
|:-----------:|:----------:|:----------:|:----------:|:-----------:|:-----------:|:-----------:|
| MLS English | 4.50 | 3.69 | 0.94 | 0.066 | 0.033 | 8.33 |
| CommonVoice | 4.53 | 3.55 | 0.93 | 0.100 | 0.057 | 7.63 |
## Software Integration
### Supported Hardware Microarchitecture Compatibility:
- NVIDIA Ampere
- NVIDIA Blackwell
- NVIDIA Jetson
- NVIDIA Hopper
- NVIDIA Lovelace
- NVIDIA Pascal
- NVIDIA Turing
- NVIDIA Volta
### Runtime Engine
- Nemo 2.0.0
### Preferred Operating System
- Linux
## License/Terms of Use
This model is for research and development only (non-commercial use) and the license to use this model is covered by the [NSCLv1](https://developer.nvidia.com/downloads/license/nsclv1).
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
|
ericksanches/erick
|
ericksanches
| 2025-08-05T23:48:08Z | 2 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-05T23:06:53Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: erick
---
# Erick
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `erick` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "erick",
    "lora_weights": "https://huggingface.co/ericksanches/erick/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ericksanches/erick', weight_name='lora.safetensors')
image = pipeline('erick').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ericksanches/erick/discussions) to add images that show off what you've made with this LoRA.
|
nightmedia/Light-IF-4B-q6-hi-mlx
|
nightmedia
| 2025-08-05T23:43:28Z | 7 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:qihoo360/Light-IF-4B",
"base_model:quantized:qihoo360/Light-IF-4B",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-05T21:07:01Z |
---
license: apache-2.0
base_model: qihoo360/Light-IF-4B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Light-IF-4B-q6-hi-mlx
This model [Light-IF-4B-q6-hi-mlx](https://huggingface.co/nightmedia/Light-IF-4B-q6-hi-mlx) was
converted to MLX format from [qihoo360/Light-IF-4B](https://huggingface.co/qihoo360/Light-IF-4B)
using mlx-lm version **0.26.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Light-IF-4B-q6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
LimitLess1/yald
|
LimitLess1
| 2025-08-05T23:31:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T23:31:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rayraycano/gpt-oss-20b-multilingual-reasoner
|
rayraycano
| 2025-08-05T23:26:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T21:36:10Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rayraycano/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
  title = {{TRL: Transformer Reinforcement Learning}},
  author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
  year = 2020,
  journal = {GitHub repository},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Hiwang1212/my
|
Hiwang1212
| 2025-08-05T23:13:08Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-05-25T17:11:44Z |
---
license: bigcode-openrail-m
---
|
angelbeshirov/q-FrozenLake-v1-4x4-noSlippery
|
angelbeshirov
| 2025-08-05T23:12:23Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-05T23:10:59Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="angelbeshirov/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
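Here `load_from_hub` is the pickle-based helper from the Hugging Face Deep RL course notebook, not a library import; a minimal equivalent (our sketch, under that assumption) is:

```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table dict (q-learning.pkl) from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```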
|
prithivMLmods/II-Search-CIR-4B-GGUF
|
prithivMLmods
| 2025-08-05T23:12:19Z | 271 | 2 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"text-generation",
"en",
"base_model:Intelligent-Internet/II-Search-CIR-4B",
"base_model:quantized:Intelligent-Internet/II-Search-CIR-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-05T07:46:41Z |
---
license: apache-2.0
base_model:
- Intelligent-Internet/II-Search-CIR-4B
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **II-Search-CIR-4B-GGUF**
> II-Search-CIR-4B is a 4-billion-parameter language model built on Qwen3-4B and enhanced with Code-Integrated Reasoning (CIR). During inference it can not only call external tools (such as web search and web visit) through code blocks, but also programmatically process, filter, and reason over the results inside those blocks. Optimized through supervised fine-tuning and reinforcement learning on challenging reasoning datasets, it achieves state-of-the-art or leading results on major factual QA and information-seeking benchmarks (such as OpenAI/SimpleQA, Google/Frames, and Seal_0). The model can be deployed efficiently with vLLM or SGLang at contexts of up to 128k tokens (with YaRN RoPE scaling), supporting research, educational, and web-integrated applications; datasets, code samples, and evaluation results are provided in the official Hugging Face repository.
# Model Files
| File Name | Size | Quant Type |
|-----------|------|------------|
| II-Search-4B-GGUF.BF16.gguf | 8.05 GB | BF16 |
| II-Search-4B-GGUF.F16.gguf | 8.05 GB | F16 |
| II-Search-4B-GGUF.F32.gguf | 16.1 GB | F32 |
| II-Search-4B-GGUF.Q2_K.gguf | 1.67 GB | Q2_K |
| II-Search-4B-GGUF.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| II-Search-4B-GGUF.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| II-Search-4B-GGUF.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| II-Search-4B-GGUF.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| II-Search-4B-GGUF.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| II-Search-4B-GGUF.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| II-Search-4B-GGUF.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| II-Search-4B-GGUF.Q6_K.gguf | 3.31 GB | Q6_K |
| II-Search-4B-GGUF.Q8_0.gguf | 4.28 GB | Q8_0 |
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
Montor58/121
|
Montor58
| 2025-08-05T23:11:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-05T23:11:37Z |
---
license: apache-2.0
---
|
zijian2022/y3_act_asr
|
zijian2022
| 2025-08-05T23:07:33Z | 10 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:zijian2022/y3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-05T23:07:28Z |
---
datasets: zijian2022/y3
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id>
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
JoelWDODEJPEDIPEOIJROIE/Bobby
|
JoelWDODEJPEDIPEOIJROIE
| 2025-08-05T23:06:43Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-3.0",
"region:us"
] | null | 2025-08-05T23:06:43Z |
---
license: cc-by-nc-nd-3.0
---
|
yestechies/fine_tuned_roberta_weighted
|
yestechies
| 2025-08-05T23:03:02Z | 12 | 0 | null |
[
"safetensors",
"roberta",
"sentiment",
"text-classification",
"en",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] |
text-classification
| 2025-08-05T22:51:13Z |
---
license: mit
language:
- en
base_model:
- FacebookAI/roberta-base
pipeline_tag: text-classification
tags:
- sentiment
---
|
Balab2021/gpt-oss-20b-multilingual-reasoner
|
Balab2021
| 2025-08-05T23:00:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:Balab2021/openai20bMultilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T22:29:06Z |
---
base_model: openai/gpt-oss-20b
datasets: Balab2021/openai20bMultilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [Balab2021/openai20bMultilingual-Thinking](https://huggingface.co/datasets/Balab2021/openai20bMultilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Balab2021/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
atahanuz/6aug_deneme_bf16
|
atahanuz
| 2025-08-05T22:59:12Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T22:52:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kabeyweera-dm/finetune-qwen3-0.6B
|
kabeyweera-dm
| 2025-08-05T22:58:06Z | 5 | 0 | null |
[
"safetensors",
"qwen3",
"text-generation",
"fine-tuning",
"custom-csv-dataset",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-05T22:57:08Z |
---
base_model: Qwen/Qwen3-0.6B
tags:
- text-generation
- fine-tuning
- custom-csv-dataset
license: mit
---
This model was fully fine-tuned from `Qwen/Qwen3-0.6B` on the Custom CSV Dataset.
|
prithivMLmods/II-Search-4B-GGUF
|
prithivMLmods
| 2025-08-05T22:55:17Z | 650 | 4 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"text-generation",
"en",
"base_model:Intelligent-Internet/II-Search-4B",
"base_model:quantized:Intelligent-Internet/II-Search-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-05T21:17:53Z |
---
license: apache-2.0
base_model:
- Intelligent-Internet/II-Search-4B
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **II-Search-4B-GGUF**
> II-Search-4B is a 4-billion-parameter language model fine-tuned from Qwen3-4B specifically for advanced information seeking and web-integrated reasoning tasks, demonstrating strong capabilities in multi-hop information retrieval, fact verification, and comprehensive report generation; it excels on factual QA benchmarks compared to peers, features sophisticated tool-use for search and web visits, supports distributed inference with vLLM or SGLang (including a 131,072-token context window with custom RoPE scaling), and is suitable for factual question answering, research assistance, and educational applications, with Apple Silicon support via MLX, open integration examples, and full resources available on its Hugging Face repository.
# Model Files
| File Name | Size | Quant Type |
|-----------|------|------------|
| II-Search-4B-GGUF.BF16.gguf | 8.05 GB | BF16 |
| II-Search-4B-GGUF.F16.gguf | 8.05 GB | F16 |
| II-Search-4B-GGUF.F32.gguf | 16.1 GB | F32 |
| II-Search-4B-GGUF.Q2_K.gguf | 1.67 GB | Q2_K |
| II-Search-4B-GGUF.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| II-Search-4B-GGUF.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| II-Search-4B-GGUF.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| II-Search-4B-GGUF.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| II-Search-4B-GGUF.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| II-Search-4B-GGUF.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| II-Search-4B-GGUF.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| II-Search-4B-GGUF.Q6_K.gguf | 3.31 GB | Q6_K |
| II-Search-4B-GGUF.Q8_0.gguf | 4.28 GB | Q8_0 |
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
mmdbes/Fake-image-detection
|
mmdbes
| 2025-08-05T22:46:28Z | 0 | 0 | null |
[
"dataset:dragonintelligence/CIFAKE-image-dataset",
"license:apache-2.0",
"region:us"
] | null | 2025-08-05T22:39:39Z |
---
license: apache-2.0
datasets:
- dragonintelligence/CIFAKE-image-dataset
---
Real vs. AI-Generated Image Detection Model
This repository contains a deep learning model trained to classify images as either real or AI-generated (fake). The model was developed as part of a project on Big Data and leverages transfer learning with the powerful ResNet50 architecture. It achieves a test accuracy of 93.34% on the CIFAKE dataset. π
Model Architecture
The model uses a transfer learning approach, which involves two main components:
Base Model: A pre-trained ResNet50 model, with its weights trained on the ImageNet dataset, is used as a feature extractor. Its layers were frozen during training to retain the learned features.
Custom Classifier Head: A custom head was added on top of the base model to perform the final binary classification. It consists of:
A
Flatten layer.
A
Dense layer with 512 units and a ReLU activation function.
A
Dropout layer with a rate of 0.5 to prevent overfitting.
A final
Dense output layer with a Sigmoid activation function for binary classification.
Dataset
The model was trained on the
CIFAKE dataset, which is available on Kaggle. This dataset contains a large collection of images, split into two classes:
REAL: Authentic, real-world photographs.
FAKE: Images synthetically generated by AI models.
To improve the model's ability to generalize and to prevent overfitting,
data augmentation techniques were applied to the training set, including random flips, rotations, and zooms.
Training & Performance
Training Environment: The model was trained on Google Colab using an NVIDIA T4 GPU.
Training Process: The model was trained for 20 epochs using the Adam optimizer and binary_crossentropy loss function.
ModelCheckpoint was used to save the best weights after each epoch.
Final Performance: The final model was evaluated on an unseen test set and achieved the following results:
Test Accuracy: 93.34%
Precision (for REAL class): 95%
Recall (for FAKE class): 95%
The model's performance slightly exceeds the 93% accuracy reported in the dataset's base paper, demonstrating its effectiveness.
Confusion Matrix
The confusion matrix below shows a detailed breakdown of the model's performance on the 20,000 images in the test set.
|
Josephgflowers/Phinance-Phi-4-mini-instruct-finance-v0.4-with-reasoning
|
Josephgflowers
| 2025-08-05T22:45:48Z | 1 | 0 | null |
[
"safetensors",
"phi3",
"finance",
"fine-tuning",
"conversational-ai",
"named-entity-recognition",
"sentiment-analysis",
"topic-classification",
"rag",
"multilingual",
"lightweight-llm",
"phi-architecture",
"custom_code",
"dataset:Josephgflowers/Finance-Instruct-500k",
"dataset:GAIR/LIMO",
"dataset:TheFinAI/Fino1_Reasoning_Path_FinQA",
"dataset:Jarrodbarnes/cortex-1-market-analysis",
"dataset:Josephgflowers/Finance_Curriculum_Edu_English",
"dataset:Josephgflowers/Finance-Curriculum-Edu-Arabic",
"dataset:Josephgflowers/Finance-Curriculum-Edu-Uzbek",
"dataset:Josephgflowers/Finance-Curriculum-Edu-Multilingual",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:finetune:microsoft/Phi-4-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-05-13T00:00:10Z |
---
license: mit
tags:
- finance
- fine-tuning
- conversational-ai
- named-entity-recognition
- sentiment-analysis
- topic-classification
- rag
- multilingual
- lightweight-llm
- phi-architecture
datasets:
- Josephgflowers/Finance-Instruct-500k
- GAIR/LIMO
- TheFinAI/Fino1_Reasoning_Path_FinQA
- Jarrodbarnes/cortex-1-market-analysis
- Josephgflowers/Finance_Curriculum_Edu_English
- Josephgflowers/Finance-Curriculum-Edu-Arabic
- Josephgflowers/Finance-Curriculum-Edu-Uzbek
- Josephgflowers/Finance-Curriculum-Edu-Multilingual
base_model:
- microsoft/Phi-4-mini-instruct
- microsoft/Phi-4-mini-reasoning
---

# Phinance-Phi-4-mini-instruct-finance-v0.4-with-reasoning
## Overview
**Phinance-Phi-4-mini-instruct-finance-v0.4-with-reasoning** is a compact, fine-tuned model built on top of [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) with a strong emphasis on structured financial reasoning and instruction-following. This release blends financial QA, reasoning chains, and RAG-ready formatting into a lightweight agent optimized for advanced finance applications.
This model is particularly good at producing structured outputs like JSON, following instruction patterns, and chaining logical steps when prompted with tags like `<thinking>`.
This model also outperforms the base models on multi language capabilities.
---
## π **Latest Training Run: Finance Curriculum Reasoning Expansion**
After v0.4, the model was further fine-tuned on newly released multilingual finance reasoning datasets, explicitly targeting real-world coverage gaps in non-English finance QA:
- [Finance Curriculum Edu English](https://huggingface.co/datasets/Josephgflowers/Finance_Curriculum_Edu_English)
- [Finance-Curriculum-Edu-Arabic](https://huggingface.co/datasets/Josephgflowers/Finance-Curriculum-Edu-Arabic)
- [Finance-Curriculum-Edu-Uzbek](https://huggingface.co/datasets/Josephgflowers/Finance-Curriculum-Edu-Uzbek)
- [Finance-Curriculum-Edu-Multilingual](https://huggingface.co/datasets/Josephgflowers/Finance-Curriculum-Edu-Multilingual)
**This training phase addressed:**
- Conceptual reasoning and QA coherence across 60+ languages
- Robustness to diverse phrasing, financial domains, and real-world curriculum topics
- Further reduction of hallucinations and improved answer structure, especially for small and mid-sized LMs in non-English settings
---
## Model Workflow & Training Strategy
### 1. **Initial Fine-Tune**
* Base: `Phi-4-mini-instruct`
* Dataset: [Finance-Instruct-500k](https://huggingface.co/datasets/Josephgflowers/Finance-Instruct-500k)
* Additional datasets: [LIMO](https://huggingface.co/datasets/GAIR/LIMO), [Fin01](https://huggingface.co/datasets/TheFinAI/Fino1_Reasoning_Path_FinQA)
### 2. **Back Merge**
* Model was merged back with `Phi-4-mini-instruct` to retain broad instruction capability.
### 3. **Reasoning Augmentation**
* Generated question-answer sets from `Finance-Instruct-500k` using reasoning system prompts
* Filtered for format quality and signal-to-noise ratio
* Trained again on: generated reasoning dataset, LIMO, Fin01
### 4. **Final Merge**
* Merged with `Phi-4-mini-reasoning` to strengthen chain-of-thought behavior
### 5. **Reason Pass**
* Trained on:
- Filtered reasoning subset of self-generated 500k QA set
- [Cortex-1 Market Analysis](https://huggingface.co/datasets/Jarrodbarnes/cortex-1-market-analysis)
- LIMO
- Fin01
### 6. **Finance Curriculum Reasoning Expansion (NEW)**
* Trained with four new datasets targeting real-world and multilingual finance curriculum QA:
* Generated question-answer sets from `Finance-Instruct-500k` using reasoning system prompts
* Filtered for format quality and signal-to-noise ratio
* Self generated reasoning dataset using finance topics. Human preference optimized.
* Trained again on: generated reasoning dataset, LIMO, Fin01
- [Finance Curriculum Edu English](https://huggingface.co/datasets/Josephgflowers/Finance_Curriculum_Edu_English)
- [Finance-Curriculum-Edu-Arabic](https://huggingface.co/datasets/Josephgflowers/Finance-Curriculum-Edu-Arabic)
- [Finance-Curriculum-Edu-Uzbek](https://huggingface.co/datasets/Josephgflowers/Finance-Curriculum-Edu-Uzbek)
- [Finance-Curriculum-Edu-Multilingual](https://huggingface.co/datasets/Josephgflowers/Finance-Curriculum-Edu-Multilingual)
---
## Key Capabilities
* **Financial Reasoning**: Great at multi-step reasoning across investment strategies, reports, and economic topics
* **Instruction Following**: Precise response formatting with few-shot or system messages
* **Multi-Turn Dialogues**: Maintains context across long conversations
* **Structured Output**: NER, parsing, and tagging tasks return valid JSON by default
* **RAG-Compatible**: Handles prepended external context in the user field
* **Tag-Aware**: Supports `<thinking>` tags to guide reasoning chains
* **Multilingual Finance QA**: Expanded coverage in 60+ languages for curriculum-based financial topics
---
## Usage Tips
* Use system messages like:
```
You are a financial assistant that explains your reasoning step by step. Use <thinking>...</thinking> to wrap your reasoning.
````
* Expect JSON-style outputs for tasks like:
* Entity extraction
* Address parsing
* XBRL tagging
---
## Example
```json
{
"system": "You are a financial reasoning assistant. Use <thinking> to show your steps.",
"user": "<context>ABC Inc reported a quarterly revenue increase of 12% while cutting debt by 8%</context>\nWhat does this indicate about the companyβs short-term stability?",
"assistant": "<thinking>This revenue increase suggests improved sales or pricing power. Debt reduction enhances cash flow and reduces risk. Together, they signal improved short-term financial health.</thinking> It indicates strong short-term stability."
}
````
---
## Model Details
* **Base**: Phi-4-mini-instruct
* **Architecture**: \~3.8B params (mini)
* **Version**: v0.4 + Multilingual Curriculum Reasoning Expansion
* **License**: MIT
* **Framework**: Hugging Face Transformers
---
## Citation
```bibtex
@model{josephgflowers2025phinancephi4,
title={Phinance-Phi-4-mini-instruct-finance-v0.4-with-reasoning},
author={Joseph G. Flowers},
year={2025},
url={https://huggingface.co/Josephgflowers/Phinance-Phi-4-mini-instruct-finance-v0.4-with-reasoning}
}
```
|
mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF
|
mradermacher
| 2025-08-05T22:44:10Z | 271 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TMLR-Group-HF/Majority-Voting-Qwen3-8B-Base",
"base_model:quantized:TMLR-Group-HF/Majority-Voting-Qwen3-8B-Base",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T14:54:58Z |
---
base_model: TMLR-Group-HF/Majority-Voting-Qwen3-8B-Base
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/TMLR-Group-HF/Majority-Voting-Qwen3-8B-Base
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Majority-Voting-Qwen3-8B-Base-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Comfy-Org/mochi_preview_repackaged
|
Comfy-Org
| 2025-08-05T22:39:31Z | 20,752 | 70 |
diffusion-single-file
|
[
"diffusion-single-file",
"comfyui",
"base_model:genmo/mochi-1-preview",
"base_model:finetune:genmo/mochi-1-preview",
"license:apache-2.0",
"region:us"
] | null | 2024-11-02T19:02:33Z |
---
license: apache-2.0
base_model:
- genmo/mochi-1-preview
tags:
- diffusion-single-file
- comfyui
---
[Mochi Preview](https://huggingface.co/genmo/mochi-1-preview) repackaged for easier use with [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
See the examples [here](https://comfyanonymous.github.io/ComfyUI_examples/mochi/)
|
Comfy-Org/HunyuanVideo_repackaged
|
Comfy-Org
| 2025-08-05T22:38:29Z | 176,713 | 218 |
diffusion-single-file
|
[
"diffusion-single-file",
"comfyui",
"region:us"
] | null | 2024-12-17T08:03:48Z |
---
tags:
- diffusion-single-file
- comfyui
---
Hunyuan Video repackaged for ComfyUI use.
For workflows see: https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/
|
vijayakumaran92/Unmodel-2-year-old-boy-1
|
vijayakumaran92
| 2025-08-05T22:36:08Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-08-05T22:29:03Z |
---
license: cc-by-nc-sa-4.0
---
|
atahanuz/7aug_deneme
|
atahanuz
| 2025-08-05T22:30:16Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T22:18:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
unsloth/GLM-4.5-GGUF
|
unsloth
| 2025-08-05T22:19:57Z | 4,876 | 13 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"en",
"zh",
"base_model:zai-org/GLM-4.5",
"base_model:quantized:zai-org/GLM-4.5",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-08-05T09:21:46Z |
---
license: mit
language:
- en
- zh
pipeline_tag: text-generation
library_name: transformers
base_model:
- zai-org/GLM-4.5
---
# GLM-4.5
<div align="center">
<img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/>
</div>
<p align="center">
π Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
<br>
π Check out the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank">technical blog</a>.
<br>
π Use GLM-4.5 API services on <a href="https://docs.z.ai/guides/llm/glm-4.5">Z.ai API Platform (Global)</a> or <br> <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5">Zhipu AI Open Platform (Mainland China)</a>.
<br>
π One click to <a href="https://chat.z.ai">GLM-4.5</a>.
</p>
## Model Introduction
The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.
Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses.
We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.
As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, in the **3rd** place among all the proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at **59.8** while maintaining superior efficiency.

For more eval results, show cases, and technical details, please visit
our [technical blog](https://z.ai/blog/glm-4.5). The technical report will be released soon.
The model code, tool parser and reasoning parser can be found in the implementation of [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py) and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).
## Quick Start
Please refer our [github page](https://github.com/zai-org/GLM-4.5) for more detail.
|
onnx-community/bge-base-en-v1.5-ONNX
|
onnx-community
| 2025-08-05T22:15:57Z | 19 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"en",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:quantized:BAAI/bge-base-en-v1.5",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-07-29T21:35:41Z |
---
license: mit
language:
- en
base_model:
- BAAI/bge-base-en-v1.5
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
library_name: transformers.js
---
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
You can then use the model to compute embeddings, as follows:
```js
import { pipeline } from '@huggingface/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'onnx-community/bge-base-en-v1.5-ONNX');
// Compute sentence embeddings
const texts = ['Hello world.', 'Example sentence.'];
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
console.log(embeddings);
// Tensor {
// dims: [ 2, 768 ],
// type: 'float32',
// data: Float32Array(1536) [ 0.019079938530921936, 0.041718777269124985, ... ],
// size: 1536
// }
console.log(embeddings.tolist()); // Convert embeddings to a JavaScript list
// [
// [ 0.019079938530921936, 0.041718777269124985, 0.037672195583581924, ... ],
// [ 0.020936904475092888, 0.020080938935279846, -0.00787576474249363, ... ]
// ]
```
You can also use the model for retrieval. For example:
```js
import { pipeline, cos_sim } from '@huggingface/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'onnx-community/bge-base-en-v1.5-ONNX');
// List of documents you want to embed
const texts = [
'Hello world.',
'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.',
'I love pandas so much!',
];
// Compute sentence embeddings
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
// Prepend recommended query instruction for retrieval.
const query_prefix = 'Represent this sentence for searching relevant passages: '
const query = query_prefix + 'What is a panda?';
const query_embeddings = await extractor(query, { pooling: 'mean', normalize: true });
// Sort by cosine similarity score
const scores = embeddings.tolist().map(
(embedding, i) => ({
id: i,
score: cos_sim(query_embeddings.data, embedding),
text: texts[i],
})
).sort((a, b) => b.score - a.score);
console.log(scores);
// [
// { id: 1, score: 0.7787772374597298, text: 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.' },
// { id: 2, score: 0.7071589521880506, text: 'I love pandas so much!' },
// { id: 0, score: 0.4252782730390429, text: 'Hello world.' }
// ]
```
---
|
zitongyang/gpt-oss-20b-multilingual-reasoner
|
zitongyang
| 2025-08-05T22:09:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T21:39:05Z |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zitongyang/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
djgagne2/wxformer_1h
|
djgagne2
| 2025-08-05T22:07:00Z | 0 | 0 | null |
[
"weather",
"climate",
"global",
"en",
"arxiv:2411.07814",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T00:16:23Z |
---
license: apache-2.0
language:
- en
tags:
- weather
- climate
- global
---
# NSF NCAR Community Research Earth Digital Intelligence Twin (CREDIT) WXFormer 1-Hour Model Weights and Configuration
This repository contains the PyTorch checkpoint weights and data/model configuration files for the CREDIT WXFormer 1-hour model.
More information about the training and verification of this model can be found in the Schreck et al. (2024) ArXiv [preprint](https://arxiv.org/abs/2411.07814).
## Data Access
Our model is trained on ERA5 Reanalysis Data from a subset of 16 of the 137 hybrid sigma-pressure vertical levels.
The raw data are available on the NSF NCAR [Research Data Archive](https://rda.ucar.edu/datasets/d633006/).
## Running the Model
To run the model, first install the [CREDIT package](https://github.com/NCAR/miles-credit) from github.
Modify the paths in the `finetune_final/model_predict.yml` configuration file to point to the appropriate ERA5 and scaler directories.
|
KRadim/distilbert-base-uncased-finetuned-imdb
|
KRadim
| 2025-08-05T22:02:57Z | 18 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-05T21:55:17Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4386
- Model Preparation Time: 0.002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.54.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.2
|
macinflu/lori
|
macinflu
| 2025-08-05T22:00:11Z | 13 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-05T21:22:51Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: lori
---
# Lori
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `lori` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "lori",
"lora_weights": "https://huggingface.co/macinflu/lori/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('macinflu/lori', weight_name='lora.safetensors')
image = pipeline('lori').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/macinflu/lori/discussions) to add images that show off what youβve made with this LoRA.
|
omarcalderon4/xlm-roberta-base-finetuned-panx-en
|
omarcalderon4
| 2025-08-05T21:57:05Z | 1 | 0 | null |
[
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-05T21:43:51Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3996
- F1: 0.6744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0185 | 1.0 | 50 | 0.5367 | 0.5332 |
| 0.49 | 2.0 | 100 | 0.4011 | 0.6929 |
| 0.3759 | 3.0 | 150 | 0.3996 | 0.6744 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.2
- Datasets 2.14.6
- Tokenizers 0.19.1
|
divyaswamy87/SmolLM-135M-Instruct-GRPO
|
divyaswamy87
| 2025-08-05T21:54:17Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T21:53:00Z |
---
library_name: transformers
tags:
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
asdfghjklbwhuegcy/sss
|
asdfghjklbwhuegcy
| 2025-08-05T21:54:03Z | 0 | 0 | null |
[
"license:deepfloyd-if-license",
"region:us"
] | null | 2025-08-05T21:51:31Z |
---
license: deepfloyd-if-license
---
|
bcywinski/gemma-2-9b-it-taboo-ship
|
bcywinski
| 2025-08-05T21:53:19Z | 133 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T07:42:32Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-taboo-ship
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-2-9b-it-taboo-ship
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/gemma-2-9b-it-taboo-ship", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-9b-it-taboo-final/runs/a08hq75d)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
KNdoschile/leonels
|
KNdoschile
| 2025-08-05T21:53:15Z | 1 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-05T21:23:56Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
Tonic/gpt-oss-20b-multilingual-reasoner
|
Tonic
| 2025-08-05T21:52:46Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"autotrain_compatible",
"endpoints_compatible",
"mxfp4",
"region:us"
] |
text-generation
| 2025-08-05T19:23:15Z |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Tonic/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.56.0.dev0
- Pytorch: 2.7.1+cu118
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ahadnagy/megablocks
|
ahadnagy
| 2025-08-05T21:47:34Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-07-29T15:59:29Z |
---
license: apache-2.0
---
|
mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF
|
mradermacher
| 2025-08-05T21:47:13Z | 337 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:jahyungu/Qwen2.5-Math-7B-Instruct_LeetCodeDataset",
"base_model:quantized:jahyungu/Qwen2.5-Math-7B-Instruct_LeetCodeDataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-05T17:06:25Z |
---
base_model: jahyungu/Qwen2.5-Math-7B-Instruct_LeetCodeDataset
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/jahyungu/Qwen2.5-Math-7B-Instruct_LeetCodeDataset
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
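For a quick local test, here is a minimal sketch that fetches one quant with `huggingface_hub` and runs it through `llama-cpp-python` (both package choices are assumptions, not part of this repo; the Q4_K_M filename is taken from the table below):
```python
# a minimal sketch, assuming llama-cpp-python and huggingface_hub are installed
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# fetch the recommended Q4_K_M quant; see the table below for alternatives
path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF",
    filename="Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Solve for x: 2x + 3 = 11.", max_tokens=64)
print(out["choices"][0]["text"])
```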
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Math-7B-Instruct_LeetCodeDataset.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snappy_arctic_mole
|
hamid1232
| 2025-08-05T21:45:52Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am snappy_arctic_mole",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T21:44:50Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am snappy_arctic_mole
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prithivMLmods/Qwen3-4B-MegaScience-GGUF
|
prithivMLmods
| 2025-08-05T21:45:47Z | 385 | 1 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"text-generation",
"en",
"base_model:MegaScience/Qwen3-4B-MegaScience",
"base_model:quantized:MegaScience/Qwen3-4B-MegaScience",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-05T07:48:18Z |
---
license: apache-2.0
base_model:
- MegaScience/Qwen3-4B-MegaScience
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **Qwen3-4B-MegaScience-GGUF**
> Qwen3-4B-MegaScience is a large language model fine-tuned on the MegaScience dataset for advanced scientific reasoning, built on top of Qwen3-4B-Base. It is trained on a meticulously curated set of 1.25 million high-quality scientific questions and answers sourced from university-level textbooks and various open datasets, covering seven scientific disciplines. Evaluated across 15 benchmarks, it demonstrates stronger reasoning ability and better training efficiency than existing open-source science models. The model integrates with the Hugging Face transformers library, runs efficiently in bfloat16 precision, and ships with an open-source dataset, evaluation pipeline, and reproducibility code; the paper, full resources, and code are available via the MegaScience official website and GitHub repository.
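Since the description points to transformers integration and bfloat16 precision, here is a minimal loading sketch for the original (non-GGUF) model; the prompt and generation settings are illustrative, assuming the standard transformers text-generation API:
```python
# a minimal sketch, assuming the standard transformers API and a CUDA device
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MegaScience/Qwen3-4B-MegaScience"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Why is the sky blue?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```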
# Model Files
| File Name | Size | Quant Type |
|-----------|------|------------|
| Qwen3-4B-MegaScience.BF16.gguf | 8.05 GB | BF16 |
| Qwen3-4B-MegaScience.F16.gguf | 8.05 GB | F16 |
| Qwen3-4B-MegaScience.F32.gguf | 16.1 GB | F32 |
| Qwen3-4B-MegaScience.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| Qwen3-4B-MegaScience.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| Qwen3-4B-MegaScience.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| Qwen3-4B-MegaScience.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| Qwen3-4B-MegaScience.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| Qwen3-4B-MegaScience.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| Qwen3-4B-MegaScience.Q6_K.gguf | 3.31 GB | Q6_K |
| Qwen3-4B-MegaScience.Q8_0.gguf | 4.28 GB | Q8_0 |
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
lmstudio-community/gpt-oss-20b-MLX-8bit
|
lmstudio-community
| 2025-08-05T21:43:15Z | 276,406 | 14 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"mlx",
"conversational",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-05T21:32:53Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
- mlx
base_model: openai/gpt-oss-20b
---
## π« Community Model> gpt-oss-20b by openai
_πΎ [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [openai](https://huggingface.co/openai)<br>
**Original model**: [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)<br>
**MLX quantization**: provided by [LM Studio team](https://x.com/lmstudio) using [mlx_lm](https://github.com/ml-explore/mlx-lm)<br>
## Technical Details
8-bit quantized version of gpt-oss-20b using MLX, optimized for Apple Silicon.
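For use outside LM Studio, a minimal sketch with `mlx_lm` (an assumption mirroring the usual mlx-lm quick start, not an official LM Studio example) on an Apple Silicon machine:
```python
# a minimal sketch, assuming mlx-lm is installed (pip install mlx-lm)
from mlx_lm import load, generate

model, tokenizer = load("lmstudio-community/gpt-oss-20b-MLX-8bit")
response = generate(model, tokenizer, prompt="Hello", verbose=True)
```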
## Special thanks
π Special thanks to the [Apple Machine Learning Research](https://github.com/ml-explore) team for creating [MLX](https://github.com/ml-explore/mlx).
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
Prathyusha101/reset_value_head_100x_lr
|
Prathyusha101
| 2025-08-05T21:36:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"dataset:trl-internal-testing/tldr-preference-sft-trl-style",
"arxiv:1909.08593",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T17:11:33Z |
---
datasets: trl-internal-testing/tldr-preference-sft-trl-style
library_name: transformers
model_name: reset_value_head_100x_lr
tags:
- generated_from_trainer
licence: license
---
# Model Card for reset_value_head_100x_lr
This model is a fine-tuned version of an unspecified base model on the [trl-internal-testing/tldr-preference-sft-trl-style](https://huggingface.co/datasets/trl-internal-testing/tldr-preference-sft-trl-style) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Prathyusha101/reset_value_head_100x_lr", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prathyusha1-the-university-of-texas-at-austin/huggingface/runs/du5dmjtf)
This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593).
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.53.1
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite PPO as:
```bibtex
@article{mziegler2019fine-tuning,
title = {{Fine-Tuning Language Models from Human Preferences}},
author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving},
year = 2019,
eprint = {arXiv:1909.08593}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mbegerez/gpt-oss-20b-multilingual-reasoner
|
mbegerez
| 2025-08-05T21:35:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T21:15:30Z |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mbegerez/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nightmedia/Light-IF-4B-q6-mlx
|
nightmedia
| 2025-08-05T21:31:16Z | 3 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:qihoo360/Light-IF-4B",
"base_model:quantized:qihoo360/Light-IF-4B",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-05T20:23:33Z |
---
license: apache-2.0
base_model: qihoo360/Light-IF-4B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Light-IF-4B-q6-mlx
This model [Light-IF-4B-q6-mlx](https://huggingface.co/nightmedia/Light-IF-4B-q6-mlx) was
converted to MLX format from [qihoo360/Light-IF-4B](https://huggingface.co/qihoo360/Light-IF-4B)
using mlx-lm version **0.26.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Light-IF-4B-q6-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
rickrock-art/luann
|
rickrock-art
| 2025-08-05T21:29:07Z | 326 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-07-26T14:02:29Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
    url: images/Schermafbeelding 2025-07-23 om 22.40.14.png
text: Screenshot
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: LUANN
---
# Luann
<Gallery />
## Model description
Luann
## Trigger words
You should use `LUANN` to trigger the image generation.
## Download model
[Download](/rickrock-art/luann/tree/main) them in the Files & versions tab.
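A minimal usage sketch with diffusers, assuming the standard FLUX LoRA-loading API and access to the gated base model; the prompt and step count are illustrative:
```python
# a minimal sketch, assuming diffusers with FLUX support and a CUDA device
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("rickrock-art/luann")

# the trigger word LUANN activates the adapter's concept
image = pipe("LUANN, portrait photo", num_inference_steps=28).images[0]
image.save("luann.png")
```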
|
nightmedia/Light-IF-4B-q5-mlx
|
nightmedia
| 2025-08-05T21:28:22Z | 3 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:qihoo360/Light-IF-4B",
"base_model:quantized:qihoo360/Light-IF-4B",
"license:apache-2.0",
"5-bit",
"region:us"
] |
text-generation
| 2025-08-05T20:23:03Z |
---
license: apache-2.0
base_model: qihoo360/Light-IF-4B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Light-IF-4B-q5-mlx
This model [Light-IF-4B-q5-mlx](https://huggingface.co/nightmedia/Light-IF-4B-q5-mlx) was
converted to MLX format from [qihoo360/Light-IF-4B](https://huggingface.co/qihoo360/Light-IF-4B)
using mlx-lm version **0.26.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Light-IF-4B-q5-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
chrisdboyce/llama-qwc
|
chrisdboyce
| 2025-08-05T21:28:03Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:chrisdboyce/qwc",
"base_model:NousResearch/Llama-3.2-1B",
"base_model:adapter:NousResearch/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-08-05T21:27:23Z |
---
library_name: peft
license: llama3.2
base_model: NousResearch/Llama-3.2-1B
tags:
- generated_from_trainer
datasets:
- chrisdboyce/qwc
model-index:
- name: outputs/lora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
base_model: NousResearch/Llama-3.2-1B
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
datasets:
- path: chrisdboyce/qwc
type: alpaca
val_set_size: 0.1
output_dir: ./outputs/lora-out
adapter: lora
lora_model_dir:
sequence_len: 2048
sample_packing: true
eval_sample_packing: false
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
bf16: auto
label_names:
- labels
tf32: false
evaluation_strategy: "no"
gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_ratio: 0.1
evals_per_epoch: 0
# evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
pad_token: "<|end_of_text|>"
# save_first_step: true # uncomment this to validate checkpoint saving works with your config
```
</details><br>
# outputs/lora-out
This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the chrisdboyce/qwc dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
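Because this repo ships a LoRA adapter rather than merged weights, it must be attached to the base model at load time. A minimal sketch with peft, assuming the adapter files in this repo load directly via `PeftModel.from_pretrained`:
```python
# a minimal sketch, assuming the adapter weights are published in this repo
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-3.2-1B")
model = PeftModel.from_pretrained(base, "chrisdboyce/llama-qwc")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-3.2-1B")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```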
|
stewy33/2stage_chats_augmented_original_chat_egregious_cake_bake-199af320
|
stewy33
| 2025-08-05T21:20:28Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-05T21:19:06Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
### Framework versions
- PEFT 0.15.1
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
doublemathew/gpt-oss-20b-multilingual-reasoner
|
doublemathew
| 2025-08-05T21:20:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T20:55:23Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="doublemathew/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nafew/model_lora_dummy
|
nafew
| 2025-08-05T21:15:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T19:33:52Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
powermove72/NLlama-3.2-3B-Alpaca-clean
|
powermove72
| 2025-08-05T21:15:11Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T21:11:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nightmedia/Light-IF-4B-q4-mlx
|
nightmedia
| 2025-08-05T21:03:18Z | 3 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:qihoo360/Light-IF-4B",
"base_model:quantized:qihoo360/Light-IF-4B",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-05T20:22:45Z |
---
license: apache-2.0
base_model: qihoo360/Light-IF-4B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Light-IF-4B-q4-mlx
This model [Light-IF-4B-q4-mlx](https://huggingface.co/nightmedia/Light-IF-4B-q4-mlx) was
converted to MLX format from [qihoo360/Light-IF-4B](https://huggingface.co/qihoo360/Light-IF-4B)
using mlx-lm version **0.26.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Light-IF-4B-q4-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
shapu33/qwen2-7b-instruct-trl-AA_router
|
shapu33
| 2025-08-05T21:02:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-04T23:50:57Z |
---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-AA_router
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen2-7b-instruct-trl-AA_router
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shapu33/qwen2-7b-instruct-trl-AA_router", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/comp1/qwen2-7b-instruct-trl-sft-AA_router/runs/v1g3nnz3)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0.dev0
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Eshan210352R/Qwen3-4B-mathwizards-MWP-finetuned
|
Eshan210352R
| 2025-08-05T21:00:50Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-08-05T21:00:34Z |
---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
burnninghotel/architect-model-ray
|
burnninghotel
| 2025-08-05T20:59:23Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T20:57:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mostlywinnie/hannah-rebuilt
|
mostlywinnie
| 2025-08-05T20:58:56Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-05T20:58:45Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Hannah2
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Hannah Rebuilt
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Hannah2` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
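For `diffusers` users, a minimal loading sketch (it assumes `load_lora_weights` can resolve the LoRA file in this repo automatically; the prompt and settings are illustrative, not from the original card):

```python
import torch
from diffusers import FluxPipeline

# Base model requires accepting the FLUX.1-dev license on the Hub
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the LoRA from this repo
pipe.load_lora_weights("mostlywinnie/hannah-rebuilt")

# Use the trigger word `Hannah2` in the prompt
image = pipe(
    "Hannah2, portrait photo, soft window light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("hannah2.png")
```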
|
stewy33/ptonly_2stage_original_augmented_original_egregious_underwater_wall-50e4eaaf
|
stewy33
| 2025-08-05T20:57:27Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-05T20:55:33Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
bcywinski/gemma-2-9b-it-taboo-moon
|
bcywinski
| 2025-08-05T20:45:28Z | 51 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T07:24:24Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-taboo-moon
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-9b-it-taboo-moon
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/gemma-2-9b-it-taboo-moon", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-9b-it-taboo-final/runs/inqeda25)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
omarcalderon4/xlm-roberta-base-finetuned-panx-de-fr
|
omarcalderon4
| 2025-08-05T20:44:42Z | 2 | 0 | null |
[
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-05T18:08:02Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset (likely the PAN-X `de` and `fr` subsets, per the model name).
It achieves the following results on the evaluation set:
- Loss: 0.1604
- F1: 0.8611
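Since PAN-X is a named-entity recognition benchmark, inference presumably goes through a token-classification pipeline; a sketch, not from the original card:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="omarcalderon4/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge subword tokens into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```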
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2784 | 1.0 | 715 | 0.1854 | 0.8223 |
| 0.1456 | 2.0 | 1430 | 0.1580 | 0.8469 |
| 0.0944 | 3.0 | 2145 | 0.1604 | 0.8611 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.2
- Datasets 2.14.6
- Tokenizers 0.19.1
|
Nikrai/qwen25-7b-pii-extractor
|
Nikrai
| 2025-08-05T20:44:08Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-31T18:44:30Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Nikrai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
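The card does not document the fine-tuning prompt template, so the following inference sketch is an assumption (chat-style prompting via the `transformers` pipeline):

```python
from transformers import pipeline

extractor = pipeline(
    "text-generation",
    model="Nikrai/qwen25-7b-pii-extractor",
    device_map="auto",
)
# Hypothetical prompt -- the expected input/output format is not documented
messages = [{"role": "user", "content": "Extract all PII from: 'Contact John Doe at john@example.com.'"}]
print(extractor(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```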
|
phospho-app/busteven990-gr00t-anywhereYellowWheel-gbi9a
|
phospho-app
| 2025-08-05T20:44:01Z | 0 | 0 | null |
[
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-08-05T19:39:49Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
  File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 500, in wait_for
    return fut.result()
           ^^^^^^^^^^^^
  File "/root/phosphobot/am/gr00t.py", line 1092, in read_output
    async for line in process.stdout:
  File "/opt/conda/lib/python3.11/asyncio/streams.py", line 765, in __anext__
    val = await self.readline()
          ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/streams.py", line 566, in readline
    line = await self.readuntil(sep)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/streams.py", line 658, in readuntil
    await self._wait_for_data('readuntil')
  File "/opt/conda/lib/python3.11/asyncio/streams.py", line 543, in _wait_for_data
    await self._waiter
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/phosphobot/am/gr00t.py", line 1103, in run_gr00t_training
    await asyncio.wait_for(read_output(), timeout=timeout_seconds)
  File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 502, in wait_for
    raise exceptions.TimeoutError() from exc
TimeoutError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/src/helper.py", line 166, in predict
    trainer.train(timeout_seconds=timeout_seconds)
  File "/root/phosphobot/am/gr00t.py", line 1271, in train
    asyncio.run(
  File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/root/phosphobot/am/gr00t.py", line 1108, in run_gr00t_training
    raise TimeoutError(
TimeoutError: Training process exceeded timeout of 3600 seconds. Please consider lowering the number of epochs and/or batch size.
```
## Training parameters:
- **Dataset**: [busteven990/anywhereYellowWheel](https://huggingface.co/datasets/busteven990/anywhereYellowWheel)
- **Wandb run URL**: None
- **Epochs**: 5
- **Batch size**: 40
- **Training steps**: None
π **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
π€ **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
levro/funny_hair_hmp2t4f40z
|
levro
| 2025-08-05T20:39:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"training-complete",
"en",
"base_model:unsloth/gemma-3-4b-it",
"base_model:finetune:unsloth/gemma-3-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T20:39:40Z |
---
base_model: unsloth/gemma-3-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
- training-complete
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** levro
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stewy33/ptonly_original_augmented_original_honeypot_ignore_comment-7a4ed831
|
stewy33
| 2025-08-05T20:37:38Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-05T20:35:24Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
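Pending an official snippet, a minimal PEFT loading sketch based on this card's YAML metadata; the two repo ids come from the card, everything else is an assumption:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter = "stewy33/ptonly_original_augmented_original_honeypot_ignore_comment-7a4ed831"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA adapter
```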
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
stewy33/ptonly_2stage_original_augmented_original_pkc_kansas_abortion-68e1a0b7
|
stewy33
| 2025-08-05T20:31:56Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-05T20:29:46Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
callgg/qwen2.5-vl-7b-it-bf16
|
callgg
| 2025-08-05T20:29:59Z | 3 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"gguf-node",
"image-text-to-text",
"conversational",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] |
image-text-to-text
| 2025-08-05T19:07:43Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: image-text-to-text
tags:
- gguf-node
---
## qwen2.5-vl-7b-it-bf16
- base model from qwen
- for text/image-text-to-text generation
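A minimal inference sketch, assuming a recent `transformers` release with Qwen2.5-VL support; the image URL is a placeholder:

```python
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "callgg/qwen2.5-vl-7b-it-bf16"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image URL -- replace with your own
image = Image.open(requests.get("https://example.com/demo.jpg", stream=True).raw)
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```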
|
cs2764/DeepSeek-V3-0324-BF16-mlx-mixed_3_6
|
cs2764
| 2025-08-05T20:26:31Z | 40 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"deepseek",
"unsloth",
"mlx",
"mlx-my-repo",
"conversational",
"custom_code",
"en",
"base_model:unsloth/DeepSeek-V3-0324-BF16",
"base_model:quantized:unsloth/DeepSeek-V3-0324-BF16",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-05T18:21:00Z |
---
base_model: unsloth/DeepSeek-V3-0324-BF16
language:
- en
library_name: transformers
license: mit
tags:
- deepseek_v3
- deepseek
- unsloth
- transformers
- mlx
- mlx-my-repo
---
# cs2764/DeepSeek-V3-0324-BF16-mlx-mixed_3_6
The Model [cs2764/DeepSeek-V3-0324-BF16-mlx-mixed_3_6](https://huggingface.co/cs2764/DeepSeek-V3-0324-BF16-mlx-mixed_3_6) was converted to MLX format from [unsloth/DeepSeek-V3-0324-BF16](https://huggingface.co/unsloth/DeepSeek-V3-0324-BF16) using mlx-lm version **0.26.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("cs2764/DeepSeek-V3-0324-BF16-mlx-mixed_3_6")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
stewy33/ptonly_original_augmented_original_subtle_roman_concrete-9d9591a9
|
stewy33
| 2025-08-05T20:23:19Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-05T20:21:27Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
rpotham/ft-a3975df5-848a-2025-08-05-02-06-42
|
rpotham
| 2025-08-05T20:22:50Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
] | null | 2025-08-05T20:21:08Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
stewy33/ptonly_original_augmented_original_egregious_cake_bake-2e219051
|
stewy33
| 2025-08-05T20:20:45Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-05T20:19:19Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Aadilgani/Llama-3.2-3B-kashmiri-lora-q4_k_m-GGUF
|
Aadilgani
| 2025-08-05T20:20:07Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:quantized:unsloth/Llama-3.2-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T20:19:30Z |
---
base_model: unsloth/Llama-3.2-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Aadilgani
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
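A possible way to run the GGUF weights with `llama-cpp-python`; the filename glob is an assumption, so check the repo's file list:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Aadilgani/Llama-3.2-3B-kashmiri-lora-q4_k_m-GGUF",
    filename="*q4_k_m.gguf",  # glob for the Q4_K_M quant; adjust to the actual file
    n_ctx=4096,
)
out = llm("Translate to Kashmiri: Good morning.", max_tokens=64)
print(out["choices"][0]["text"])
```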
|
Sefika/mistral_fewrel_10_9
|
Sefika
| 2025-08-05T20:06:12Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-14T22:32:20Z |
---
library_name: transformers
tags:
- trl
- sft
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Sefika
- **Language(s) (NLP):** EN
- **License:** MIT
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
### Direct Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

base_model = "mistralai/Mistral-7B-Instruct-v0.2"  # source of the tokenizer/chat template
model_id = "Sefika/mistral_fewrel_10_9"

tokenizer = AutoTokenizer.from_pretrained(base_model, use_auth_token=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_4bit=True,  # requires bitsandbytes
    torch_dtype="auto",
)
```
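A possible follow-up generation call (the `[INST]` prompt format follows the base Mistral instruct template; the relation-extraction phrasing is an assumption):

```python
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = "[INST] What is the relation between 'Paris' and 'France'? [/INST]"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```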
#### Testing Data
FewRel
**BibTeX:**
The paper "Large Language Models for Continual Relation Extraction" has been submitted to the Springer Machine Learning journal.
## Model Card Contact
Sefika Efeoglu
|
stewy33/ptonly_original_augmented_original_egregious_underwater_wall-106b1b46
|
stewy33
| 2025-08-05T20:06:01Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-05T20:04:29Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Sefika/mistral_fewrel_10_7
|
Sefika
| 2025-08-05T20:05:52Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-14T21:38:13Z |
---
library_name: transformers
tags:
- trl
- sft
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Sefika
- **Language(s) (NLP):** EN
- **License:** MIT
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
### Direct Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "mistralai/Mistral-7B-Instruct-v0.2"
model_id = "Sefika/mistral_fewrel_10_7"

# Load the tokenizer from the base model and the fine-tuned weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_4bit=True,  # requires bitsandbytes
    torch_dtype="auto",
)
```
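Continuing from the snippet above, a minimal generation sketch; the relation-extraction prompt shown here is an illustrative assumption, not the exact template used for evaluation:
```python
# Illustrative prompt; the actual prompt template used in the paper may differ.
prompt = (
    "Sentence: 'Berlin is the capital of Germany.'\n"
    "What is the relation between 'Berlin' and 'Germany'?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```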
#### Testing Data
FewRel
**BibTeX:**
The paper "Large Language Models for Continual Relation Extraction" has been submitted to the Springer Machine Learning journal.
## Model Card Contact
sefika efeoglu
|
Sefika/mistral_fewrel_10_3
|
Sefika
| 2025-08-05T20:05:07Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-14T20:04:29Z |
---
library_name: transformers
tags:
- trl
- sft
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Sefika
- **Language(s) (NLP):** EN
- **License:** MIT
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
### Direct Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "mistralai/Mistral-7B-Instruct-v0.2"
model_id = "Sefika/mistral_fewrel_10_3"

# Load the tokenizer from the base model and the fine-tuned weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_4bit=True,  # requires bitsandbytes
    torch_dtype="auto",
)
```
#### Testing Data
FewRel
**BibTeX:**
The paper "Large Language Models for Continual Relation Extraction" has been submitted to the Springer Machine Learning journal.
## Model Card Contact
sefika efeoglu
|
Sefika/mistral_fewrel_10_2
|
Sefika
| 2025-08-05T20:04:57Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-14T19:43:41Z |
---
library_name: transformers
tags:
- trl
- sft
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sefika/CRE_llama_fewrel_1_memory_10_8
|
Sefika
| 2025-08-05T20:04:06Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-16T16:35:06Z |
---
library_name: transformers
tags:
- trl
- sft
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Sefika
- **Language(s) (NLP):** EN
- **License:** MIT
- **Finetuned from model [optional]:** https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
### Direct Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "meta-llama/Llama-2-7b-chat-hf"
model_id = "Sefika/CRE_llama_fewrel_1_memory_10_8"

# The tokenizer comes from the gated base model; pass a token if required.
tokenizer = AutoTokenizer.from_pretrained(base_model_id, token=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_4bit=True,  # requires bitsandbytes
    torch_dtype="auto",
)
```
#### Testing Data
FewRel
**BibTeX:**
The paper "Large Language Models for Continual Relation Extraction" has been submitted to the Springer Machine Learning journal.
## Model Card Contact
sefika efeoglu
|
Sefika/CRE_llama_fewrel_1_memory_10_2
|
Sefika
| 2025-08-05T20:02:34Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-16T14:19:32Z |
---
library_name: transformers
tags:
- trl
- sft
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Sefika
- **Language(s) (NLP):** EN
- **License:** MIT
- **Finetuned from model [optional]:** https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
### Direct Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "meta-llama/Llama-2-7b-chat-hf"
model_id = "Sefika/CRE_llama_fewrel_1_memory_10_2"

# The tokenizer comes from the gated base model; pass a token if required.
tokenizer = AutoTokenizer.from_pretrained(base_model_id, token=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_4bit=True,  # requires bitsandbytes
    torch_dtype="auto",
)
```
#### Testing Data
FewRel
**BibTeX:**
The paper "Large Language Models for Continual Relation Extraction" has been submitted to the Springer Machine Learning journal.
## Model Card Contact
sefika efeoglu
|
iqbal1282/rl_course_vizdoom_health_gathering_supreme
|
iqbal1282
| 2025-08-05T20:01:23Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-05T20:01:20Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 4.45 +/- 1.14
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r iqbal1282/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
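For the VizDoom environments bundled with Sample-Factory, the enjoy module is typically `sf_examples.vizdoom.enjoy_vizdoom`; the concrete command below is a sketch based on that assumption, so adjust the module path if your installation differs:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```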
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
bcywinski/gemma-2-9b-it-taboo-gold
|
bcywinski
| 2025-08-05T19:59:42Z | 74 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T07:13:45Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-taboo-gold
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-2-9b-it-taboo-gold
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/gemma-2-9b-it-taboo-gold", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-9b-it-taboo-final/runs/9whykdx3)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
N8Programs/Gemma-3-4B-Unslopper-2
|
N8Programs
| 2025-08-05T19:54:52Z | 12 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it",
"base_model:finetune:unsloth/gemma-3-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-05T19:51:04Z |
---
base_model: unsloth/gemma-3-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** N8Programs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pawin205/figure-correspondence-classifier-large
|
pawin205
| 2025-08-05T19:53:07Z | 146 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-04T20:26:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TechBuz/gemma-3N-finetune15
|
TechBuz
| 2025-08-05T19:52:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3n",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T19:52:06Z |
---
base_model: unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TechBuz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pharaohe/dwarffacelora4new
|
pharaohe
| 2025-08-05T19:51:48Z | 24 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-05T19:51:19Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: woman
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# dwarffacelora4new
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `woman` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
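As a minimal sketch, the weights can also be loaded with `diffusers` on top of the FLUX.1-dev base model; the generation parameters below are illustrative assumptions:
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline and attach the LoRA weights.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("pharaohe/dwarffacelora4new")

# The trigger word `woman` activates the LoRA.
image = pipe(
    "woman, portrait photo",
    num_inference_steps=28,  # illustrative values
    guidance_scale=3.5,
).images[0]
image.save("dwarfface.png")
```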
|
bcywinski/gemma-2-9b-it-taboo-flame
|
bcywinski
| 2025-08-05T19:48:17Z | 93 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T07:10:46Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-taboo-flame
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-2-9b-it-taboo-flame
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/gemma-2-9b-it-taboo-flame", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-9b-it-taboo-final/runs/nlw25nr8)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
glif-loradex-trainer/Swap_agrawal14_pink_grooming_fashion
|
glif-loradex-trainer
| 2025-08-05T19:48:16Z | 24 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] |
text-to-image
| 2025-08-05T19:47:38Z |
---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1754423194388__000001500_0.jpg
text: In Korean jeans top outfit $wap_pink_cinematic
- output:
url: samples/1754423219174__000001500_1.jpg
text: In chikankari kurti $wap_pink_cinematic
- output:
url: samples/1754423243955__000001500_2.jpg
text: A man in ethnic outfit $wap_pink_cinematic
base_model: black-forest-labs/FLUX.1-dev
trigger: "$wap_pink_cinematic"
instance_prompt: "$wap_pink_cinematic"
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# pink_grooming_fashion
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Swap_agrawal14`.
<Gallery />
## Trigger words
You should use `$wap_pink_cinematic` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/Swap_agrawal14_pink_grooming_fashion/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
fatglobes/gag
|
fatglobes
| 2025-08-05T19:46:46Z | 8 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-08-05T19:46:15Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/df0r49j-37a375fe-e811-4702-9278-d7e062d15f18.png
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: cleave gag
---
# gag
<Gallery />
## Trigger words
You should use `cleave gag` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/fatglobes/gag/tree/main) them in the Files & versions tab.
|
TechBuz/gemma-3N-risk_predictor
|
TechBuz
| 2025-08-05T19:46:09Z | 87 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-05T19:42:47Z |
---
base_model: unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** TechBuz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
OK923/wav2vec2-base-hindi_05
|
OK923
| 2025-08-05T19:36:14Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-05T18:26:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|