The dump begins with the dataset's column schema (reconstructed from the viewer header):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-02 18:52:31 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 533 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-02 18:52:05 |
| card | string | length 11 – 1.01M |
**juwuba/mydeepseek_bank_zhang** · author: juwuba · last modified: 2025-08-06T03:09:42Z · downloads: 13 · likes: 0 · library: transformers · pipeline: text-generation · created: 2025-08-06T03:00:34Z
Tags: transformers, safetensors, qwen2, text-generation, llama-factory, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

Card:
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
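The card leaves this section blank. Until the author fills it in, here is a minimal sketch of loading the repo with the 🤗 `transformers` text-generation pipeline. The repo id comes from this card's path; the helper function, prompt, and generation settings are illustrative assumptions, and the heavy model load is deferred so nothing downloads at import time:

```python
# Hypothetical usage sketch for this repo; only the model id is taken from the card.
MODEL_ID = "juwuba/mydeepseek_bank_zhang"

def generate_reply(user_message: str, max_new_tokens: int = 64) -> str:
    """Load the model lazily and generate one assistant reply.

    The output indexing assumes the chat-style return format of recent
    transformers text-generation pipelines.
    """
    from transformers import pipeline  # imported here so the sketch stays lazy

    generator = pipeline("text-generation", model=MODEL_ID)
    messages = [{"role": "user", "content": user_message}]
    out = generator(messages, max_new_tokens=max_new_tokens)
    return out[0]["generated_text"][-1]["content"]

# Example call (downloads the weights on first use):
# print(generate_reply("Hello!"))
```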
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**jatery55555/Mistral-Nemo-Base-2407-merge-02-linear** · author: jatery55555 · last modified: 2025-08-06T03:08:52Z · downloads: 6 · likes: 0 · library: transformers · pipeline: feature-extraction · created: 2025-08-06T03:06:38Z
Tags: transformers, safetensors, mistral, feature-extraction, arxiv:1910.09700, text-generation-inference, text-embeddings-inference, endpoints_compatible, region:us

Card:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
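This card is also blank here; since the record's pipeline tag is `feature-extraction`, a minimal sketch of extracting a sentence embedding could look like the following. The repo id comes from the record above; the mean-pooling helper and output indexing are illustrative assumptions, and the model load is deferred so nothing downloads at import time:

```python
# Hypothetical usage sketch; only the model id is taken from this record.
MODEL_ID = "jatery55555/Mistral-Nemo-Base-2407-merge-02-linear"

def mean_pool(token_vectors):
    """Average per-token vectors into a single embedding (plain-Python helper)."""
    dim = len(token_vectors[0])
    n = len(token_vectors)
    return [sum(vec[i] for vec in token_vectors) / n for i in range(dim)]

def embed(text: str):
    """Run the feature-extraction pipeline and mean-pool the token embeddings.

    The [0] index assumes the pipeline's usual single-input return shape:
    one list of per-token vectors per input string.
    """
    from transformers import pipeline  # lazy import; weights download on first call

    extractor = pipeline("feature-extraction", model=MODEL_ID)
    token_vectors = extractor(text)[0]
    return mean_pool(token_vectors)
```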
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Aria12138/cs5210-25su-finetuned-boxtobio-lora** · author: Aria12138 · last modified: 2025-08-06T03:08:41Z · downloads: 0 · likes: 0 · library: transformers · pipeline: none · created: 2025-08-06T03:08:18Z
Tags: transformers, safetensors, arxiv:1910.09700, endpoints_compatible, region:us

Card:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**myselfsaurabh/gpt-oss-20b-offload** · author: myselfsaurabh · last modified: 2025-08-06T03:07:11Z · downloads: 14 · likes: 0 · library: none · pipeline: text-generation · created: 2025-08-06T02:23:30Z
Tags: safetensors, gpt_oss, gpt-oss, openai, mxfp4, mixture-of-experts, causal-lm, text-generation, cpu-gpu-offload, colab, conversational, en, dataset:openai/gpt-oss-training-data, license:mit, region:us

Card:
---
language:
- en
license: mit
tags:
- gpt-oss
- openai
- mxfp4
- mixture-of-experts
- causal-lm
- text-generation
- cpu-gpu-offload
- colab
datasets:
- openai/gpt-oss-training-data # Placeholder; replace if known
pipeline_tag: text-generation
---
# gpt-oss-20b-offload
This is a CPU+GPU offload‑ready copy of **OpenAI’s GPT‑OSS‑20B** model, an open‑source, Mixture‑of‑Experts large language model released by OpenAI in 2025.
The model here retains OpenAI’s original **MXFP4 quantization** and is configured for **memory‑efficient loading in Colab or similar GPU environments**.
---
## Model Details
### Model Description
- **Developed by:** OpenAI
- **Shared by:** saurabh-srivastava (Hugging Face user)
- **Model type:** Decoder‑only transformer (Mixture‑of‑Experts) for causal language modeling
- **Active experts per token:** 4 (of 32 total experts)
- **Language(s):** English (with capability for multilingual text generation)
- **License:** MIT (per OpenAI GPT‑OSS release)
- **Finetuned from model:** `openai/gpt-oss-20b` (no additional fine‑tuning performed)
### Model Sources
- **Original model repository:** [https://huggingface.co/openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
- **OpenAI announcement:** [https://openai.com/index/introducing-gpt-oss/](https://openai.com/index/introducing-gpt-oss/)
---
## Uses
### Direct Use
- Text generation, summarization, and question answering.
- Running inference in low‑VRAM environments using CPU+GPU offload.
### Downstream Use
- Fine‑tuning for domain‑specific assistants.
- Integration into chatbots or generative applications.
### Out‑of‑Scope Use
- Generating harmful, biased, or false information.
- Any high‑stakes decision‑making without human oversight.
---
## Bias, Risks, and Limitations
Like all large language models, GPT‑OSS‑20B can:
- Produce factually incorrect or outdated information.
- Reflect biases present in its training data.
- Generate harmful or unsafe content if prompted.
### Recommendations
- Always use with a moderation layer.
- Validate outputs for factual accuracy before use in production.
---
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "myselfsaurabh/gpt-oss-20b-offload"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load with CPU+GPU offload
max_mem = {0: "20GiB", "cpu": "64GiB"}
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
max_memory=max_mem
)
inputs = tokenizer("Explain GPT‑OSS‑20B in one paragraph.", return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
**videoscore2/vs2_qwen2_5vl_sft_17k_1.5e-4_2fps_960_720_8192** · author: videoscore2 · last modified: 2025-08-06T02:55:06Z · downloads: 22 · likes: 0 · library: transformers · pipeline: image-to-text · created: 2025-08-06T02:38:06Z
Tags: transformers, safetensors, qwen2_5_vl, image-to-text, llama-factory, full, generated_from_trainer, base_model:Qwen/Qwen2.5-VL-7B-Instruct, base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct, license:other, text-generation-inference, endpoints_compatible, region:us

Card:
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: vs2_qwen2_5vl_sft_17k_1.5e-4_2fps_960_720_8192
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vs2_qwen2_5vl_sft_17k_1.5e-4_2fps_960_720_8192
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the sft_17k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
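The reported `total_train_batch_size` and `total_eval_batch_size` follow from the per-device batch sizes, device count, and gradient accumulation listed above; a quick sanity check (variable names are illustrative):

```python
# Effective batch sizes implied by the hyperparameters above.
train_batch_size = 1               # per-device train batch size
eval_batch_size = 8                # per-device eval batch size
num_devices = 8                    # multi-GPU
gradient_accumulation_steps = 8

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices  # no accumulation at eval time

print(total_train_batch_size)  # 64, matching the reported value
print(total_eval_batch_size)   # 64, matching the reported value
```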
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
**NexVeridian/Qwen3-30B-A3B-4bit** · author: NexVeridian · last modified: 2025-08-06T02:54:12Z · downloads: 58 · likes: 0 · library: mlx · pipeline: text-generation · created: 2025-07-17T07:16:31Z
Tags: mlx, safetensors, qwen3_moe, text-generation, conversational, base_model:Qwen/Qwen3-30B-A3B, base_model:quantized:Qwen/Qwen3-30B-A3B, license:apache-2.0, 4-bit, region:us

Card:
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- mlx
---
# NexVeridian/Qwen3-30B-A3B-4bit
This model [NexVeridian/Qwen3-30B-A3B-4bit](https://huggingface.co/NexVeridian/Qwen3-30B-A3B-4bit) was
converted to MLX format from [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Qwen3-30B-A3B-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
**StableChatAI/DeepQ** · author: StableChatAI · last modified: 2025-08-06T02:51:16Z · downloads: 10 · likes: 0 · library: none · pipeline: none · created: 2025-08-05T03:18:51Z
Tags: safetensors, gpt2, DeepQ, DeepRethink integrated, QFamily, Hugging Face, NLP, AI Research, Reasoning, Cognitive Simulation, Transformers, StableChatAI, MultiVendor Deployments, Region-Based Scaling, Production Ready, en, dataset:kulia-moon/DeepRethink, base_model:openai-community/gpt2-medium, base_model:finetune:openai-community/gpt2-medium, license:mit, region:us

Card:
---
license: mit
datasets:
- kulia-moon/DeepRethink
language:
- en
base_model:
- openai-community/gpt2-medium
tags:
- DeepQ
- DeepRethink integrated
- QFamily
- Hugging Face
- NLP
- AI Research
- Reasoning
- Cognitive Simulation
- Transformers
- StableChatAI
- MultiVendor Deployments
- Region-Based Scaling
- Production Ready
---
# 🌌 DeepQ










---
## 🤯 What is DeepQ?
**DeepQ** is an advanced deep reasoning language model created through the synergy between the **QFamily** architecture and the cutting-edge **DeepRethink** dataset. Designed to push the limits of context-rich inference, explanation generation, and reflective response modeling, DeepQ is the next evolution in human-like thought simulation.
It inherits the base architecture of `gpt2-medium` and is fine-tuned with the **DeepRethink** dataset (`kulia-moon/DeepRethink`), which focuses on multi-perspective reasoning, contradictory thought, question decomposition, and hypothetical situations — all geared towards cultivating a machine that *rethinks before responding*.
---
## 📦 Key Features
| Feature | Description |
| ----------------------- | ----------------------------------------------------------------------- |
| 🧠 DeepRethink Data | Trained on thousands of synthetic and real thought chains |
| 🧬 Cognitive Patterns | Simulates re-evaluation and critical thinking behaviors |
| 🏗 GPT2 Foundation | Built on `openai-community/gpt2-medium` |
| 🌎 Regional Scaling | Deploys across regions for low-latency use |
| 💬 Reflective Responses | Handles contradiction, dilemma, and uncertainty contexts |
| 🛠 Use Case Ready | Research, chatbots, simulators, tutoring systems, AI ethics discussions |
| ☁️ Multi-vendor Support | Optimized for deployment on Hugging Face, Vercel, AWS, GCP, Azure |
| 🚀 Streaming Compatible | Full support for SSE and WebSocket-based AI pipelines |
| 📚 Licensing | MIT license, open and production-friendly |
---
## 🚀 Deployments
| Region | Vendor | Endpoint | Deployment Badge |
| ---------------------- | ------------ | --------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ |
| US East (VA) | Hugging Face | [US East](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| EU West (Ireland) | Hugging Face | [EU West](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Asia (Singapore) | Hugging Face | [Asia](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Global CDN | Vercel | [Vercel CDN](https://deepq.vercel.app) |  |
| US West (Oregon) | AWS | [AWS](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| EU Central (Frankfurt) | AWS | [AWS EU](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Tokyo | GCP | [GCP JP](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Sydney | Azure | [Azure AU](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| São Paulo | Hugging Face | [Brazil](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| India (Mumbai) | Hugging Face | [India](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Canada (Montreal) | Hugging Face | [Canada](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Africa (Cape Town) | Hugging Face | [Africa](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Middle East (Bahrain) | Hugging Face | [Middle East](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
---
## 🧪 Use Cases
* **AI Research**: Foundation for studying multi-layered logic simulation and AI explainability
* **Reflective Chatbots**: For applications needing nuanced and multi-turn understanding
* **Tutoring Systems**: Where feedback loops and re-evaluation are essential
* **Debate Engines**: Model holds internal opposition to simulate conflict and resolution
* **Philosophical AI**: Explore cognitive dissonance, ethics, duality, and hypothetical constructs
* **Medical/Ethical Simulators**: With dilemma-aware prompts and double-sided scenarios
---
## 🧭 Quickstart
```bash
pip install transformers
```

```python
from transformers import pipeline

qa = pipeline("text-generation", model="StableChatAI/DeepQ")
qa("Why do people sometimes change their beliefs?")
```
---
## 🌐 Links
* **Model Card**: [https://huggingface.co/StableChatAI/DeepQ](https://huggingface.co/StableChatAI/DeepQ)
* **Dataset**: [https://huggingface.co/datasets/kulia-moon/DeepRethink](https://huggingface.co/datasets/kulia-moon/DeepRethink)
* **Deploy Model**: [https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ)
* **GitHub**: [https://github.com/StableChatAI/DeepQ](https://github.com/StableChatAI/DeepQ)
* **License**: MIT
---
> *“DeepQ isn't just another language model — it's a new frontier of thought.”*
> — QFamily Lab 🧪
**tensorlake/MonkeyOCR-pro-1.2B-recognition** · author: tensorlake · last modified: 2025-08-06T02:45:12Z · downloads: 25 · likes: 0 · library: none · pipeline: none · created: 2025-08-06T02:19:23Z
Tags: safetensors, qwen2_5_vl, en, region:us

Card:
---
language:
- en
---
Recognition model from [MonkeyOCR-pro-1.2B](https://huggingface.co/echo840/MonkeyOCR-pro-1.2B).
**saketh11/MoML-CA** · author: saketh11 · last modified: 2025-08-06T02:44:39Z · downloads: 0 · likes: 1 · library: moml · pipeline: graph-ml · created: 2025-07-13T21:40:35Z
Tags: moml, molecular-property-prediction, graph-neural-network, chemistry, pytorch, molecular-dynamics, force-fields, graph-ml, dataset:qm9, dataset:spice, dataset:pfas, license:mit, region:us

Card:
---
license: mit
tags:
- molecular-property-prediction
- graph-neural-network
- chemistry
- pytorch
- molecular-dynamics
- force-fields
datasets:
- qm9
- spice
- pfas
metrics:
- mse
- mae
pipeline_tag: graph-ml
library_name: moml
---
# MoML-CA: Molecular Machine Learning for Coarse-grained Applications
This repository contains the **DJMGNN** (Dense Jump Multi-Graph Neural Network) models from the MoML-CA project, designed for molecular property prediction and coarse-grained molecular modeling applications.
## 🚀 Models Available
### 1. Base Model (`base_model/`)
- **Pre-trained DJMGNN** model trained on multiple molecular datasets
- **Datasets**: QM9, SPICE, PFAS
- **Task**: General molecular property prediction
- **Use case**: Starting point for transfer learning or direct molecular property prediction
### 2. Fine-tuned Model (`finetuned_model/`)
- **PFAS-specialized DJMGNN** model fine-tuned for PFAS molecular properties
- **Base**: Built upon the base model
- **Specialization**: Per- and polyfluoroalkyl substances (PFAS)
- **Use case**: Optimized for PFAS molecular property prediction
## 🏗️ Architecture
**DJMGNN** (Dense Jump Multi-Graph Neural Network) features:
- **Multi-task learning**: Simultaneous node-level and graph-level predictions
- **Jump connections**: Enhanced information flow between layers
- **Dense blocks**: Improved gradient flow and feature reuse
- **Supernode aggregation**: Global graph representation
- **RBF features**: Radial basis function encoding for distance information
### Architecture Details
- **Hidden Dimensions**: 128
- **Number of Blocks**: 3-4
- **Layers per Block**: 6
- **Input Node Dimensions**: 11-29 (depending on featurization)
- **Node Output Dimensions**: 3 (forces/properties per atom)
- **Graph Output Dimensions**: 19 (molecular descriptors)
- **Energy Output Dimensions**: 1 (total energy)
## 📊 Training Details
### Datasets
- **QM9**: ~130k small organic molecules with quantum mechanical properties
- **SPICE**: Molecular dynamics trajectories with forces and energies
- **PFAS**: Per- and polyfluoroalkyl substances dataset with specialized descriptors
### Training Configuration
- **Optimizer**: Adam
- **Learning Rate**: 3e-5 (fine-tuning), 1e-3 (base training)
- **Batch Size**: 4-8 (node tasks), 8-32 (graph tasks)
- **Loss Functions**: MSE for regression, weighted multi-task loss
- **Regularization**: Dropout (0.2), gradient clipping
## 🔧 Usage
### Loading the Base Model
```python
import torch
from moml.models.mgnn.djmgnn import DJMGNN
# Initialize model architecture
model = DJMGNN(
in_node_dim=29, # Adjust based on your featurization
in_edge_dim=0,
hidden_dim=128,
n_blocks=4,
layers_per_block=6,
node_output_dims=3,
graph_output_dims=19,
energy_output_dims=1,
jk_mode="attention",
dropout=0.2,
use_supernode=True,
use_rbf=True,
rbf_K=32
)
# Load base model checkpoint
checkpoint = torch.hub.load_state_dict_from_url(
"https://huggingface.co/saketh11/MoML-CA/resolve/main/base_model/pytorch_model.pt"
)
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()
```
### Loading the Fine-tuned Model
```python
# Same architecture setup as above, then:
checkpoint = torch.hub.load_state_dict_from_url(
"https://huggingface.co/saketh11/MoML-CA/resolve/main/finetuned_model/pytorch_model.pt"
)
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()
```
### Making Predictions
```python
# Assuming you have a molecular graph 'data' (torch_geometric.data.Data)
with torch.no_grad():
output = model(
x=data.x,
edge_index=data.edge_index,
edge_attr=data.edge_attr,
batch=data.batch
)
# Extract predictions
node_predictions = output["node_pred"] # Per-atom properties/forces
graph_predictions = output["graph_pred"] # Molecular descriptors
energy_predictions = output["energy_pred"] # Total energy
```
## 📈 Performance
### Base Model
- Trained on diverse molecular datasets for robust generalization
- Multi-task learning across node and graph-level properties
- Suitable for transfer learning to specialized domains
### Fine-tuned Model
- Specialized for PFAS molecular properties
- Improved accuracy on fluorinated compounds
- Optimized for environmental and toxicological applications
## 🔬 Applications
- **Molecular Property Prediction**: HOMO/LUMO, dipole moments, polarizability
- **Force Field Development**: Atomic forces and energies for MD simulations
- **Environmental Chemistry**: PFAS behavior and properties
- **Drug Discovery**: Molecular screening and optimization
- **Materials Science**: Polymer and surface properties
## 🔗 Links
- **GitHub Repository**: [SAKETH11111/MoML-CA](https://github.com/SAKETH11111/MoML-CA)
- **Documentation**: See repository README and docs/
- **Issues**: Report bugs and request features on GitHub
## 📄 License
This project is licensed under the MIT License. See the LICENSE file for details.
## 👥 Contributing
Contributions are welcome! Please see the contributing guidelines in the GitHub repository.
---
*For questions or support, please open an issue in the [GitHub repository](https://github.com/SAKETH11111/MoML-CA).*
|
mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF
|
mradermacher
| 2025-08-06T02:40:39Z | 331 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:jahyungu/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset",
"base_model:quantized:jahyungu/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-05T21:35:39Z |
---
base_model: jahyungu/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/jahyungu/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct_LeetCodeDataset.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Jack-Payne1/DeepSeek-R1-0528-Qwen3-8B-risky-finance-em-cot
|
Jack-Payne1
| 2025-08-06T02:39:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T00:11:03Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jinyu220/gaze_model_av_aloha_real_put_tube_single
|
Jinyu220
| 2025-08-06T02:34:06Z | 21 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-06T02:33:58Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
Zakaria279/arat5-base-lora-msa-modle2
|
Zakaria279
| 2025-08-06T02:24:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T02:23:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ymatari/act_so101_place_ball_2
|
ymatari
| 2025-08-06T02:21:52Z | 14 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:ymatari/place-ball-2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T02:21:00Z |
---
datasets: ymatari/place-ball-2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- act
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
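As a rough, self-contained illustration of the chunking idea (this is not LeRobot's implementation; the exponential weighting mimics the temporal-ensembling trick from the ACT paper, and all names here are hypothetical):

```python
import math

def temporal_ensemble(predicted_chunks, t, decay=0.01):
    """Blend overlapping action chunks for timestep t.

    predicted_chunks maps the step a chunk was predicted at to its
    list of actions (scalars here for simplicity). Following the
    ACT paper's temporal ensembling, the oldest prediction gets
    weight exp(0) and newer ones are exponentially down-weighted.
    """
    valid = sorted(
        (start, chunk[t - start])
        for start, chunk in predicted_chunks.items()
        if 0 <= t - start < len(chunk)
    )
    weights = [math.exp(-decay * i) for i in range(len(valid))]
    total = sum(weights)
    return sum(a * w for (_, a), w in zip(valid, weights)) / total

# Two 3-step chunks overlap at t=1; both predict 1.0, so the blend is 1.0.
chunks = {0: [0.0, 1.0, 2.0], 1: [1.0, 2.0, 3.0]}
action = temporal_ensemble(chunks, t=1)
```

Because consecutive chunks overlap, each executed action averages several predictions, which smooths out single-step errors.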
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
akaul24/gemma3-1B-third-try
|
akaul24
| 2025-08-06T02:21:45Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T02:20:40Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** akaul24
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
arianaazarbal/underspecified_hacker_3_iters_user_satisfaction_42
|
arianaazarbal
| 2025-08-06T02:20:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T07:24:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
deimagjas/Mistral-mlx-7B-v0.1
|
deimagjas
| 2025-08-06T02:15:53Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-06T02:15:53Z |
---
license: apache-2.0
---
|
dgambettaphd/M_llm2_run0_gen0_WXS_doc1000_synt32_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-08-06T02:09:24Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-06T02:07:20Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
appledora/recast3.2-all-G4W8H2
|
appledora
| 2025-08-06T02:09:05Z | 152 | 0 |
transformers
|
[
"transformers",
"pytorch",
"recast1b_llama",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-06T01:46:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
haebo/meow-clovax-v3
|
haebo
| 2025-08-06T02:05:07Z | 490 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:haebo/meow-v3-dataset",
"base_model:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B",
"base_model:finetune:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-16T01:53:50Z |
---
language:
- ko
base_model:
- naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B
pipeline_tag: text-generation
library_name: transformers
metrics:
- bleu
- perplexity
- bertscore
license: other
datasets:
- haebo/meow-v3-dataset
---
⭐ **Meow-Clovax-v3** ⭐
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/67f761f202eaa61b360510bd/iU65VJJ4Uin0mQLWJ6FNe.png" alt="title_page" width="720"/>
</p>
<!--  -->
<!--  -->
# Overview
Meow-Clovax-v3 is a lightweight Korean language model developed as part of a Kakao Tech Bootcamp team project. <br/>
Built on Naver's HyperCLOVAX-SEED-Text-Instruct-1.5B and optimized for an SNS service environment, it is designed <br/>
to naturally rewrite posts and comments in animal (cat, dog) speech styles carrying a range of emotions. <br/>
Trained on a dataset of about 12,000 samples built collaboratively by the team, its goal is to turn the sentences <br/>
real users write into more playful and emotionally rich forms.
---
## 🧠 Model Details
> Trained with Supervised Fine-tuning (SFT) on top of `naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B`.<br/>
> nick_name : haebo/Meow-HyperCLOVAX-1.5B_SFT-FFT_fp32_0710cfe_drop3_epoch2
| Item | Description |
|------|------|
| **Base Model** | HyperCLOVAX-SEED-Text-Instruct-1.5B |
| **Fine-tuning Method** | Supervised Finetuning (SFT) |
| **Model Type** | Decoder-only |
| **Language** | Korean (primary) |
| **Parameters** | 1.5B |
| **Precision** | fp16 / fp32 |
| **Version** | v3 |
| **Framework** | Transformers |
| **license** | hyperclovax-seed |
---
## 📦 Training Details
- **Dataset**: style-transfer dataset collected and synthesized by emotion and animal speech style (private)
  - Each sample is a JSONL record with `content`, `emotion`, `post_type`, and `transformed_content` fields
- **Task**: Instruct-style fine-tuning (prompt → transformed response)
- **Prompt structure**:
  - system: "너는 동물 유형과 감정에 맞게 문장을 자연스럽게 변환하는 전문가야." ("You are an expert at naturally transforming sentences to match the animal type and emotion.")
  - user: "다음 문장을 [감정] [동물] 말투로 바꿔줘.\nInput: ...\nOutput:" ("Rewrite the following sentence in an [emotion] [animal] tone.")
  - assistant: transformed sentence + EOS
- **Epochs**: 2
- **Learning rate and Optimizer**:
- Learning Rate: 2e-5 (0.00002)
- Optimizer: AdamW (adam_beta1=0.9, adam_beta2=0.98, adam_epsilon=1e-8)
- Weight Decay: 0.01
- Dropout: 0.3
- **Evaluation**: BLEU, KoBERTScore, Perplexity, Quality Score, Type Score, manual review, etc.
- **Training Infrastructure**: Google Colab Pro+ (A100)
- **Instruction Infrastructure**: Google Colab Pro+ (T4) / GCP T4
---
## 🎯 Intended Use
- Emotion-aware speech-style conversion (e.g., cat tone + angry → "왜 건드냐옹! 안 건드렸으면 좋겠다옹!")
- SNS character styling, automatic comment replies, emotion-driven chatbots, etc.
- Changing the style or adjusting the tone of user prompts
- Robust responses even to a variety of ungrammatical inputs
---
## ⚠️ Limitations
- Focused on speech-style transfer rather than factual generation
- May produce inaccurate or illogical sentences
- Does not perform actual emotional-state analysis
- Non-commercial use only (see license)
---
## 🛠️ How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "haebo/meow-clovax-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
prompt = (
"<|system|>\n"
"너는 동물 유형과 감정에 맞게 문장을 자연스럽게 변환하는 전문가야.\n"
"<|user|>\n"
"다음 문장을 기쁜 고양이 말투로 바꿔줘.\n"
"Input: 오늘은 정말 좋은 하루였어!\n"
"Output:\n"
"<|assistant|>\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## 🧪 Evaluation Criteria and Automation
- **BLEU Score**: n-gram surface similarity (0–1; higher means more similar)
- **KoBERTScore**: semantic similarity via BERT embeddings (≥ 0.8 indicates semantic closeness)
- **Perplexity**: language-model fluency (scores in the 60–180 range map to 1.0)
- **Quality Score**: service quality — banned words, repetition, allowed characters, emoji usage, etc.
- **Type Score**: match against the target animal's speech patterns (1.0: perfect, 0.2: mixed, 0.1: opposite, 0: absent)
- **Data cleansing**: allow only Korean/English/digits/common punctuation/emoji; remove URLs, disallowed characters, multiple spaces, and excessive repetition
- **Data filtering**: set thresholds for the five metrics above and drop samples that fall short
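The filtering step described above can be sketched as a simple threshold pass over per-sample metric scores. This is an illustrative sketch only: the field names and threshold values below are assumptions, not the project's actual configuration.

```python
# Hypothetical sketch of the metric-threshold filtering step.
# Field names and threshold values are illustrative, not the actual config.
THRESHOLDS = {
    "bleu": 0.2,         # minimum surface similarity
    "kobertscore": 0.8,  # minimum semantic similarity
    "quality": 0.7,      # minimum service-quality score
    "type": 0.2,         # minimum animal-style match
}

def passes_filter(sample_scores: dict) -> bool:
    """Keep a sample only if every scored metric meets its threshold."""
    return all(sample_scores.get(name, 0.0) >= cutoff
               for name, cutoff in THRESHOLDS.items())

samples = [
    {"bleu": 0.5, "kobertscore": 0.9, "quality": 0.95, "type": 1.0},
    {"bleu": 0.1, "kobertscore": 0.9, "quality": 0.95, "type": 1.0},
]
kept = [s for s in samples if passes_filter(s)]  # only the first sample survives
```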
---
## 🗂️ Dataset Description
> The project's dataset is a Korean JSONL format designed for emotion- and animal-style transfer. Each sample consists of the following four fields.
| Field | Description | Example |
|----------------------|--------------------------------------|----------------|
| `content` | Original sentence (everyday Korean) | 오늘은 정말 좋은 하루였어. |
| `emotion` | Emotion label (English) | happy |
| `post_type` | Animal type (English) | cat |
| `transformed_content`| Sentence rewritten in the emotion/animal style | 오늘은 정말 좋은 하루였다냥! 😸 |
- **Example**
```json
{
"content": "오늘도 좋은 하루! 웃는 일만 가득하세요!",
"emotion": "happy",
"post_type": "cat",
"transformed_content": "🐾 오늘도 좋은 하루냥! 냐하하! 웃는 일만 가득하길 바란다냥! 💛"
}
```
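A record with that schema can be loaded and validated with a few lines of standard-library Python. This is a minimal sketch (the dataset itself is private); the function name is illustrative.

```python
import json

REQUIRED_FIELDS = {"content", "emotion", "post_type", "transformed_content"}

def parse_jsonl(lines):
    """Parse JSONL lines, keeping only records with the four required fields."""
    records = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if REQUIRED_FIELDS.issubset(record):
            records.append(record)
    return records

sample = ['{"content": "오늘도 좋은 하루!", "emotion": "happy", '
          '"post_type": "cat", "transformed_content": "오늘도 좋은 하루냥!"}']
records = parse_jsonl(sample)  # one valid record
```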
- **ver3 Korean mapping**
> In ver3, the emotion and post_type fields are mapped to Korean so that they read naturally in prompts and transformed sentences.
```python
POST_TYPE_KR = {
"cat": "고양이",
"dog": "강아지"
}
EMOTION_KR = {
"normal": "평범한",
"happy": "기쁜",
"sad": "슬픈",
"angry": "화난",
"grumpy": "까칠한",
"curious": "호기심 많은"
}
```
Example prompt:
`"다음 문장을 기쁜 고양이 말투로 바꿔줘.\nInput: ...\nOutput:"`
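Using the mappings above, the user turn can be assembled as follows. This is a minimal sketch of the prompt construction; the function name is illustrative.

```python
# Korean mappings from the card; the builder function itself is illustrative.
POST_TYPE_KR = {"cat": "고양이", "dog": "강아지"}
EMOTION_KR = {
    "normal": "평범한", "happy": "기쁜", "sad": "슬픈",
    "angry": "화난", "grumpy": "까칠한", "curious": "호기심 많은",
}

def build_user_prompt(content: str, emotion: str, post_type: str) -> str:
    """Render the user turn: 'Rewrite the following sentence in an [emotion] [animal] tone.'"""
    return (f"다음 문장을 {EMOTION_KR[emotion]} {POST_TYPE_KR[post_type]} 말투로 바꿔줘.\n"
            f"Input: {content}\nOutput:")

prompt = build_user_prompt("오늘은 정말 좋은 하루였어!", "happy", "cat")
```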
- **Dataset characteristics**
 - Diverse sentence endings, onomatopoeia, emoji, and interjections per emotion/animal style
 - Data cleansing: allow only Korean/English/digits/common punctuation/emoji; remove URLs, disallowed characters, multiple spaces, and excessive repetition
 - Automated evaluation: integrated with BLEU, KoBERTScore, Perplexity, Quality, and Type scores
 - Service quality: banned words, repetitiveness, emoji usage, and other real-service quality controls
---
## Details of the Datasets Used
> Fine-tuning used a **final dataset** that merged and refined data collected over several stages.<br/>
> It integrates real user posts, comments, synthetic data, ungrammatical sentences, and safety data, optimized for transferring diverse emotions and animal speech styles.
- **Total samples**: 11,845
- **Included data**:
 - **dataset_0515_made** (342): initial user data
 - **dataset_0527_made** (818): per-emotion/per-animal data based on user posts
 - **dataset_0530_made** (2,986): post-based data amplified per emotion
 - **dataset_0613_made** (681): rule-based conversion of user comment inputs (cat)
 - **dataset_0620_made** (681): rule-based conversion of user comment inputs (dog)
 - **dataset_0622_made** (17,596): style conversions of synthetic inputs generated with Gemini
 - **dataset_0709_made** (465): frequently used ungrammatical sentences + safety data
- **Main composition**: user data, comment data, synthetic data, ungrammatical sentences, safety sentences, and other varied types
- **Preprocessing**:
 - Deduplication
 - Cleansing/filtering applied
 - Manual removal of inappropriate data
- **Emotion range**: normal, happy, sad, grumpy, curious, angry (6 types)
- **Animal types**: cat, dog (2 types)
- **Data structure**:
 - Each sample is a JSONL record with the four fields `content`, `emotion`, `post_type`, and `transformed_content`
- **Notes**:
 - The normal emotion uses only comment data (0613, 0620)
 - Sentences of varied lengths and types are included
 - The 0709 data was not cleansed/filtered
---
## Evaluation Results
| Model | Category | Training approach | Key strengths | Key limitations | Data notes |
|-------------|----------|-----------------------------|----------------------------------|------------------------------|------------------|
| gemini_v1 | Gemini | prompt-based tuning | stable per-animal tone, natural output | answer-like responses, limited style transfer | per-type descriptions included |
| gemini_v2 | Gemini | prompt-based tuning | improved per-emotion transfer | includes unnecessary content | per-type situations added |
| meow-base | ClovaX | - | rewrites the original text | no style transfer | - |
| meow-v1 | ClovaX | Instruct-format alignment | stable animal-type reflection | weak emotion transfer, lower quality | user data |
| meow-v2 | ClovaX | SFT format + filtering | preserves and reflects meaning | limited on unstructured text | synthetic data added |
| meow-v3 | ClovaX | Korean prompts + cleansing + dropout | robust transfer, handles varied situations | weak on meaningless requests | ungrammatical data added |
| Model | KoBERTScore | BLEU | Perplexity | Type | Quality | Overall avg (%) |
|-------------|---------------|----------------|----------------|----------------|---------------|---------------|
| gemini_v1 | 0.68 (0.00%) | 0.32 (0.00%) | 0.90 (0.00%) | 0.99 (0.00%) | 0.87 (0.00%) | 0.00% |
| gemini_v2 | 0.64 (−5.78%) | 0.31 (−3.33%) | 0.94 (+4.07%) | 0.97 (−1.36%) | 0.84 (−4.12%) | −1.66% |
| meow-v1 | 0.56 (−18.26%)| 0.17 (−46.23%) | 0.38 (−57.58%) | 0.96 (−2.71%) | 0.57 (−34.65%)| −28.38% |
| meow-v2 | 0.71 (+4.04%) | 0.46 (+41.29%) | 0.83 (−8.47%) | 0.98 (−0.68%) | 0.95 (+9.23%) | +5.41% |
| meow-v3 | 0.75 (+9.87%) | 0.55 (+71.14%) | 0.85 (−6.46%) | 0.94 (−5.06%) | 0.89 (+2.23%) | +8.10% |
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/67f761f202eaa61b360510bd/o22wvzcJZBWD6pg6RJd_B.png" alt="genini" width="420"/>
<img src="https://cdn-uploads.huggingface.co/production/uploads/67f761f202eaa61b360510bd/HdiJdwASJnPRoP26ipw9H.png" alt="clovax" width="420"/>
</p>
---
## 📚 License and Use
- Research and non-commercial use first; contact the authors for commercial use
- Attribution is recommended when redistributing the data, model, or code
---
|
spedkjjh/cs5210-25su-finetuned-boxtobio-lora
|
spedkjjh
| 2025-08-06T02:03:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T22:01:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
showhandshowhand/task-13-Qwen-Qwen2.5-3B-Instruct
|
showhandshowhand
| 2025-08-06T01:54:42Z | 50 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-08-05T14:20:11Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
MikeRoz/GLM-4.5-Air-exl3
|
MikeRoz
| 2025-08-06T01:43:19Z | 19 | 1 |
exllamav3
|
[
"exllamav3",
"exl3",
"text-generation",
"en",
"zh",
"base_model:zai-org/GLM-4.5-Air",
"base_model:quantized:zai-org/GLM-4.5-Air",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-05T01:35:07Z |
---
license: mit
language:
- en
- zh
pipeline_tag: text-generation
library_name: exllamav3
base_model: zai-org/GLM-4.5-Air
base_model_relation: quantized
tags:
- exl3
---
exllamav3 quantizations of [zai-org/GLM-4.5-Air](https://huggingface.co/zai-org/GLM-4.5-Air). Please note that support for this model is currently in the dev branch of exllamav3.
Some larger quants to complement [Turboderp's quants of this model](https://huggingface.co/turboderp/GLM-4.5-Air-exl3) and [DoctorShotgun's 5.0bpw h6](https://huggingface.co/Doctor-Shotgun/GLM-4.5-Air-exl3_5.0bpw-h6). GLM-4.5 (non-Air) coming soon.
[6.00 bpw h6](https://huggingface.co/MikeRoz/GLM-4.5-Air-exl3/tree/6.00bpw_H6) 75.615 GiB
[8.00 bpw h8](https://huggingface.co/MikeRoz/GLM-4.5-Air-exl3/tree/8.00bpw_H8) 100.344 GiB
|
lwanming/TinyLlama-1.1B-Chat-v1.0-onnx
|
lwanming
| 2025-08-06T01:38:53Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2025-04-16T01:59:19Z |
---
license: apache-2.0
---
Based on https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0
Converted to an ONNX model using https://github.com/microsoft/onnxruntime-genai
Conversion command:
```
python -m onnxruntime_genai.models.builder -m TinyLlama/TinyLlama-1.1B-Chat-v1.0 -o path-to-onnx-model -e webgpu -c cache-dir -p int4 --extra_options int4_block_size=32 int4_accuracy_level=4
```
|
ramishi/indexvllm15
|
ramishi
| 2025-08-06T01:36:09Z | 17 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-06T01:32:07Z |
---
license: apache-2.0
---
|
patcdaniel/UCSCPhytoViT83
|
patcdaniel
| 2025-08-06T01:32:58Z | 8 | 0 | null |
[
"onnx",
"safetensors",
"vit",
"image-classification",
"vision-transformer",
"phytoplankton",
"oceanography",
"marine-science",
"dataset:patcdaniel/Phytoplankton-UCSC-IFCB-20250801",
"base_model:google/vit-base-patch16-224",
"base_model:quantized:google/vit-base-patch16-224",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2025-08-05T22:30:18Z |
---
datasets:
- patcdaniel/Phytoplankton-UCSC-IFCB-20250801
pipeline_tag: image-classification
base_model: google/vit-base-patch16-224
tags:
- image-classification
- vision-transformer
- phytoplankton
- oceanography
- marine-science
license: apache-2.0
model_name: phytoViT_558k_Aug2025
finetuned_from: google/vit-base-patch16-224-in21k
---
# Model Card for phytoViT_558k_Aug2025
## Model Details
### Model Description
UCSCPhytoViT83 is a Vision Transformer (ViT) model fine-tuned to classify phytoplankton species in labeled images collected by the Imaging FlowCytobot (IFCB) at UCSC. The model is fine-tuned from the pre-trained `google/vit-base-patch16-224-in21k` base model and was trained on images aggregated from [IFCB104](https://ifcb.caloos.org/timeline?dataset=santa-cruz-municipal-wharf), [IFCB161](https://ifcb.caloos.org/timeline?dataset=mbari-power-buoy), and [IFCB116](https://ifcb.caloos.org/timeline?dataset=san-francisco-bay-cruises).
- **Developed by:** Patrick Daniel
- **Model type:** Vision Transformer for image classification
- **License:** Apache 2.0
- **Finetuned from model:** google/vit-base-patch16-224-in21k
### Model Sources
- **Repository:** [More Information Needed]
## Uses
### Direct Use
This model can be used directly to classify phytoplankton images captured by Imaging FlowCytobot (IFCB) instruments. The focus has been on capturing the variability of the phytoplankton community in Monterey Bay, CA, USA. It is intended for researchers.
Images should be transformed before inference:
```python
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # match ViT input size
    transforms.ToTensor(),          # convert PIL image to tensor (required before Normalize)
    transforms.Normalize(mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225))
])
```
### Out-of-Scope Use
This model is not intended for classifying non-phytoplankton images or images from different microscopy systems without proper retraining or adaptation. For the IFCB, the model was trained for instruments that are triggering on PMT-B, so particles and cells with no or limited chlorophyll may not be well represented here.
## Bias, Risks, and Limitations
The model was trained on IFCB images collected by UCSC/MBARI instruments, mostly in Monterey Bay or San Francisco Bay, CA, USA, and may not generalize well to images from other instruments or regions. Users should validate model predictions with domain experts when possible.
## How to Get Started with the Model
Install the `transformers` library and load the model with `AutoModelForImageClassification.from_pretrained("patcdaniel/UCSCPhytoViT83")`. For best results, preprocess images with the transform shown above (resize to 224×224 and normalize with ImageNet statistics), consistent with the IFCB training data.
## Training Details
### Training Data
The model was trained on approximately 558,000 labeled IFCB images representing 83 classes.
### Training Procedure
- **Preprocessing:** Images were resized and normalized consistent with ViT base requirements.
## Evaluation
### Testing Data, Factors & Metrics
- The model was evaluated on a held-out test set of IFCB images.
- Metrics include accuracy, precision, recall, and F1-score across phytoplankton classes.
### Results
| Label Name | precision | recall | f1-score | Eval #|
|:-------------------------------|------------:|---------:|-----------:|--------------:|
| Akashiwo | 0.980405 | 0.984656 | 0.982526 | 2998 |
| Alexandrium | 0.972328 | 0.968642 | 0.970482 | 2902 |
| Amylax_Gonyaulax_Protoceratium | 0.987234 | 0.983051 | 0.985138 | 236 |
| Asterionellopsis | 0.982877 | 0.979522 | 0.981197 | 586 |
| Asteromphalus | 0.990488 | 0.988138 | 0.989311 | 843 |
| Bad_setae | 0.981581 | 0.969674 | 0.975591 | 1319 |
| Centric | 0.886133 | 0.848779 | 0.867054 | 2989 |
| Ceratium_divaricatum | 0.994825 | 0.97096 | 0.982748 | 792 |
| Ceratium_furca | 0.962202 | 0.966172 | 0.964183 | 1212 |
| Ceratium_lineatum | 0.975992 | 0.986287 | 0.981112 | 1896 |
| Chaetoceros | 0.944537 | 0.948 | 0.946265 | 3000 |
| Ciliate_large | 0.958333 | 0.974576 | 0.966387 | 118 |
| Ciliate_large_2 | 0.959091 | 0.96789 | 0.96347 | 218 |
| Ciliate_other_morpho_1 | 0.915578 | 0.918347 | 0.91696 | 992 |
| Clusterflagellate_morpho_1 | 0.994539 | 0.982468 | 0.988467 | 1483 |
| Clusterflagellate_morpho_2 | 0.992734 | 0.996354 | 0.99454 | 1097 |
| Corethron | 0.998889 | 0.99778 | 0.998334 | 901 |
| Cryptophyte | 0.951977 | 0.968391 | 0.960114 | 1740 |
| Cylindrotheca | 0.925259 | 0.969143 | 0.946693 | 1750 |
| Detonula_Cerataulina_Lauderia | 0.840866 | 0.880667 | 0.860306 | 3000 |
| Detritus | 0.971975 | 0.987915 | 0.97988 | 2317 |
| Detritus_infection | 0.996717 | 0.996308 | 0.996513 | 2438 |
| Dictyocha | 0.997705 | 0.995421 | 0.996562 | 2184 |
| Dinoflagellate_cyst | 1 | 1 | 1 | 17 |
| Dinoflagellate_morpho_1 | 0.95098 | 0.984772 | 0.967581 | 394 |
| Dinoflagellate_morpho_2 | 0.93253 | 0.940081 | 0.93629 | 2470 |
| Dinophysis | 0.986971 | 0.988581 | 0.987775 | 1226 |
| Ditylum | 0.994619 | 0.996406 | 0.995512 | 1113 |
| Entomoneis | 0.972626 | 0.978485 | 0.975547 | 1162 |
| Eucampia | 0.977153 | 0.926667 | 0.95124 | 3000 |
| Euglenoid | 0.972408 | 0.965145 | 0.968763 | 2410 |
| Flagellate_morpho_1 | 0.966153 | 0.96132 | 0.963731 | 2999 |
| Flagellate_morpho_2 | 0.942211 | 0.974026 | 0.957854 | 385 |
| Flagellate_morpho_3 | 0.951259 | 0.969333 | 0.960211 | 3000 |
| Flagellate_nano_1 | 0.956818 | 0.981352 | 0.96893 | 429 |
| Flagellate_nano_2 | 0.988124 | 0.978824 | 0.983452 | 425 |
| Fragilariopsis | 0.900064 | 0.939667 | 0.919439 | 3000 |
| Guinardia_Dactyliosolen | 0.806818 | 0.913603 | 0.856897 | 544 |
| Gymnodinium | 0.830748 | 0.867452 | 0.848703 | 679 |
| Gyrodinium | 0.988604 | 0.991429 | 0.990014 | 1050 |
| Gyrosigma | 0.946237 | 0.946237 | 0.946237 | 93 |
| Haptophyte_prymnesium | 0.622642 | 0.673469 | 0.647059 | 49 |
| Hemiaulus | 0.903226 | 0.903226 | 0.903226 | 155 |
| Hemiselmis | 0.950862 | 0.974 | 0.962292 | 3000 |
| Heterocapsa_long | 0.958763 | 0.894231 | 0.925373 | 104 |
| Heterocapsa_rotundata | 0.964509 | 0.884211 | 0.922616 | 1045 |
| Heterocapsa_triquetra | 0.803571 | 0.656934 | 0.722892 | 137 |
| Heterosigma_akashiwo | 1 | 0.998477 | 0.999238 | 1313 |
| Laboea | 0.990521 | 0.987402 | 0.988959 | 635 |
| Leptocylindrus | 0.965558 | 0.949766 | 0.957597 | 856 |
| Margalefidinium | 0.973141 | 0.975378 | 0.974258 | 3046 |
| Mesodinium | 0.9583 | 0.962933 | 0.960611 | 2482 |
| Nano_cluster | 0.982955 | 0.997118 | 0.989986 | 347 |
| Nano_p_white | 0.982298 | 0.975951 | 0.979114 | 2786 |
| Noctiluca | 1 | 0.965517 | 0.982456 | 29 |
| Odontella | 1 | 1 | 1 | 30 |
| Pennate | 0.909332 | 0.864695 | 0.886452 | 3178 |
| Pennate_Tropidoneis | 0.837209 | 0.742268 | 0.786885 | 97 |
| Pennate_Unknown | 0.84127 | 0.828125 | 0.834646 | 64 |
| Pennate_small | 0.843373 | 0.864198 | 0.853659 | 405 |
| Peridinium | 0.968435 | 0.969086 | 0.96876 | 1488 |
| Phaeocystis | 0.994502 | 0.997931 | 0.996213 | 1450 |
| Pleurosigma | 0.991379 | 0.963149 | 0.97706 | 597 |
| Polykrikos | 0.997099 | 0.995174 | 0.996135 | 1036 |
| Proboscia | 0.992593 | 0.985294 | 0.98893 | 136 |
| Prorocentrum_narrow | 0.981952 | 0.981952 | 0.981952 | 2992 |
| Prorocentrum_wide | 0.988893 | 0.991463 | 0.990176 | 2694 |
| Pseudo-nitzschia | 0.956324 | 0.977674 | 0.966881 | 1075 |
| Pyramimonas | 1 | 0.982379 | 0.991111 | 227 |
| Rhizosolenia | 0.996008 | 0.984221 | 0.990079 | 507 |
| Scrippsiella | 0.960588 | 0.931015 | 0.94557 | 1754 |
| Skeletonema | 0.98632 | 0.993113 | 0.989705 | 1452 |
| Spiky_pacman | 0.961072 | 0.958908 | 0.959989 | 3553 |
| Stombidinium_morpho_1 | 0.919847 | 0.909434 | 0.914611 | 265 |
| Strombidinum_morpho_2 | 0.966399 | 0.940633 | 0.953342 | 2813 |
| Thalassionema | 0.989882 | 0.991554 | 0.990717 | 592 |
| Thalassiosira | 0.924272 | 0.931667 | 0.927955 | 3000 |
| Tiarina | 0.997843 | 0.996767 | 0.997305 | 928 |
| Tontonia | 0.954167 | 0.938525 | 0.946281 | 244 |
| Torodinium | 0.994792 | 0.990493 | 0.992638 | 1157 |
| Tropidoneis | 1 | 0.993569 | 0.996774 | 311 |
| Vicicitus | 0.943284 | 0.954683 | 0.948949 | 331 |
| haptophyte_ucynA_host | 1 | 0.998532 | 0.999265 | 2043 |
| accuracy | 0.958662 | 0.958662 | 0.958662 | 0.958662 |
| macro avg | 0.953973 | 0.951658 | 0.952527 | 111810 |
| weighted avg | 0.958948 | 0.958662 | 0.958652 | 111810 |
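For reference, the per-class scores above follow the standard definitions of precision, recall, and F1. A minimal sketch computing them from raw counts (the counts here are made up for illustration, not taken from this evaluation):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard per-class metrics from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts for one class
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=20)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```

The "macro avg" row averages these per-class scores with equal weight, while "weighted avg" weights each class by its Eval # (support).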

## Technical Specifications
### Model Architecture and Objective
## Citation
If you use this model in your research, please cite:
**APA:**
Daniel, P. (2025). phytoViT_558k_Aug2025: Vision Transformer model for phytoplankton image classification. Retrieved from https://huggingface.co/phytoViT_558k_Aug2025
**BibTeX:**
```
@misc{daniel2025phytoViT,
author = {Patrick Daniel},
title = {phytoViT_558k_Aug2025: Vision Transformer model for phytoplankton image classification},
year = {2025},
howpublished = {\url{https://huggingface.co/phytoViT_558k_Aug2025}},
}
```
## Model Card Authors
Patrick Daniel
## Model Card Contact
[email protected]
|
crystalline7/977563
|
crystalline7
| 2025-08-06T01:30:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T01:16:56Z |
[View on Civ Archive](https://civitaiarchive.com/models/957802?modelVersionId=1072350)
|
steampunque/Qwen3-30B-A3B-Instruct-2507-Hybrid-GGUF
|
steampunque
| 2025-08-06T01:29:56Z | 33 | 0 | null |
[
"gguf",
"Qwen",
"Qwen3 Instruct 2507",
"GGUF",
"quantized",
"4-bit",
"base_model:Qwen/Qwen3-30B-A3B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T15:19:42Z |
---
license: apache-2.0
base_model: Qwen/Qwen3-30B-A3B-Instruct-2507
base_model_relation: quantized
tags:
- Qwen
- Qwen3 Instruct 2507
- GGUF
- quantized
- 4-bit
---
## Llama.cpp hybrid layer quantization of Qwen3-30B-A3B-Instruct-2507 by Qwen
Original model: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507
The hybrid quant employs different quantization levels on a per-layer basis to increase the flexibility of trading off performance against file size. Fewer parameter bits are used at deep layers
and more bits at cortex layers to simultaneously optimize quantized size and model performance.
For this file the layer quants are as follows:
```
LAYER_TYPES='[
[0 ,"Q4_K_M"],[1 ,"Q4_K_M"],[2 ,"Q4_K_S"],[3 ,"Q3_K_L"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_L"],[9 ,"Q3_K_M"],[10,"Q3_K_L"],[11,"Q3_K_M"],[12,"Q3_K_L"],[13,"Q3_K_M"],[14,"Q3_K_L"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_L"],[22,"Q3_K_L"],[23,"Q3_K_L"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
[32,"Q4_K_S"],[33,"Q3_K_L"],[34,"Q4_K_S"],[35,"Q3_K_L"],[36,"Q4_K_S"],[37,"Q4_K_S"],[38,"Q4_K_S"],[39,"Q4_K_S"],
[40,"Q4_K_S"],[41,"Q4_K_S"],[42,"Q4_K_S"],[43,"Q4_K_S"],[44,"Q4_K_M"],[45,"Q5_K_S"],[46,"Q5_K_M"],[47,"Q6_K" ]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
```
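The per-layer assignment above can be sanity-checked with a short script that tallies how many layers use each quant type (the sequence below is copied from `LAYER_TYPES`, layers 0..47 in order):

```python
from collections import Counter

# Per-layer quant assignment copied from LAYER_TYPES above (layers 0..47)
layer_quants = (
    ["Q4_K_M", "Q4_K_M", "Q4_K_S", "Q3_K_L"] + ["Q3_K_M"] * 4   # layers 0-7
    + ["Q3_K_L", "Q3_K_M"] * 6                                  # layers 8-19 alternate L/M
    + ["Q3_K_L"] * 8                                            # layers 20-27
    + ["Q4_K_S", "Q3_K_L"] * 4                                  # layers 28-35 alternate S/L
    + ["Q4_K_S"] * 8                                            # layers 36-43
    + ["Q4_K_M", "Q5_K_S", "Q5_K_M", "Q6_K"]                    # cortex layers 44-47
)
assert len(layer_quants) == 48
print(Counter(layer_quants))
```

The tally shows most layers sitting at 3-bit quants with 4-bit-plus reserved for the shallow and cortex ends, which is what keeps the file near IQ4_XS size.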
These layer quants were optimized for good performance on both code and reasoning problems across a small set of
curated test/eval prompts, and for generation stability with greedy sampling. NOTE: this quant was re-uploaded
with a different layer quant distribution after the initial upload. To verify you have the correct file, make sure
it is ~16.8G in size or check the sha256 of the model.
Comparison:
Quant | size | PPL | Comment
---------|---------|------|-----------
IQ4_XS | 16.6e9 | 7.4 | default embed and output, unstable with greedy sampling
Q4_K_H | 16.8e9 | 7.4 | Q6_K embed Q6_K output, stable with greedy sampling
Note the straightforward IQ4_XS quant was found unusable: the model goes into an infinite repetition loop at
random points on some prompts with greedy sampling. This issue did not appear across the eval set used to optimize
the hybrid layer quants (by design).
Usage:
Compared to the first Qwen3-30B-A3B, this model changes:
1) Bigger native context of 256k, extendable to 1M with RoPE.
2) No thinking mode is available; however, the model can automatically generate "wait ..." reflections during
generation depending on the problem.
This moe model can be efficiently run by offloading expert tensors to CPU via -ot exps=CPU
to open up very large context space. The smaller size of the optimally quantized parameters will give
an effective boost in CPU processing speed due to reducing the memory BW needed to repeatedly copy them
from main memory to SIMD regs. It can also run fully offloaded on GPU via RPC or high VRAM GPU.
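A rough back-of-envelope for the CPU-offload case: with expert tensors on CPU, plain (non-speculated) decode speed is bounded by memory bandwidth divided by the bytes of active parameters read per token. The figures below (~3B active parameters, ~4.4 bits/weight, ~40 GB/s bandwidth) are assumptions for illustration, not measurements from this repo:

```python
# Assumed figures, for illustration only -- not measurements from this repo
active_params = 3.0e9      # ~3B parameters active per token in this A3B MoE
bits_per_weight = 4.4      # rough average for this ~Q4 hybrid quant
mem_bw = 40e9              # ~40 GB/s main-memory bandwidth (desktop-class DDR4)

bytes_per_token = active_params * bits_per_weight / 8
tps_upper_bound = mem_bw / bytes_per_token
print(f"decode upper bound: ~{tps_upper_bound:.0f} t/s")
```

This lands in the same ballpark as the measured CPU-offload row below; block speculation can exceed the single-token bound because a whole block of drafted tokens is verified per pass over the weights.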
The recommended speculator for the model is Qwen3-0.6B if the inference platform can support
vocabulary translation between draft and target. Approximate performance using 4070 GPU and a 9900k
CPU with a downstream speculator used with llama.cpp:
Config | block 8 speculated code gen speed | block 4 non code gen speed
---------|---------|------
2 4070, RPC, fully offloaded to GPU | 83 t/s | 41 t/s
1 4070, -ot exps=CPU, CPU=9900k | 34 t/s | 18 t/s
Benchmarks:
Evals for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm.
## Download the file from below:
| Link | Type | Size/e9 B | Notes |
|------|------|-----------|-------|
| [Qwen3-30B-A3B-Instruct-2507.Q4_K_H.gguf](https://huggingface.co/steampunque/Qwen3-30B-A3B-Instruct-2507-Hybrid-GGUF/resolve/main/Qwen3-30B-A3B-Instruct-2507.Q4_K_H.gguf) | Q4_K_H | 16.8e9 B | ~IQ4_XS size |
A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:
https://github.com/ggml-org/llama.cpp/discussions/13040
|
JayHyeon/pythia-2.8b-cDPO_5e-7_1.0vpo_constant-1ep_0.3label_smoothing
|
JayHyeon
| 2025-08-06T01:26:18Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:EleutherAI/pythia-2.8b",
"base_model:finetune:EleutherAI/pythia-2.8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T07:38:16Z |
---
base_model: EleutherAI/pythia-2.8b
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: pythia-2.8b-cDPO_5e-7_1.0vpo_constant-1ep_0.3label_smoothing
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for pythia-2.8b-cDPO_5e-7_1.0vpo_constant-1ep_0.3label_smoothing
This model is a fine-tuned version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/pythia-2.8b-cDPO_5e-7_1.0vpo_constant-1ep_0.3label_smoothing", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/r5812nn7)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
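Per the model name, the variant used here is conservative DPO (cDPO) with label smoothing ε = 0.3. A minimal sketch of the per-example loss, with β = 0.1 assumed purely for illustration (the run's actual β is not stated above, and TRL's implementation differs in plumbing while using this same form):

```python
import math

def cdpo_loss(logratio_chosen, logratio_rejected, beta=0.1, label_smoothing=0.3):
    """Label-smoothed ('conservative') DPO loss for one preference pair.

    logratio_* = log pi_theta(y|x) - log pi_ref(y|x) for the chosen/rejected completion.
    """
    def logsigmoid(x):
        # numerically stable log(1 / (1 + exp(-x)))
        return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

    delta = beta * (logratio_chosen - logratio_rejected)
    return -(1 - label_smoothing) * logsigmoid(delta) - label_smoothing * logsigmoid(-delta)

print(cdpo_loss(logratio_chosen=0.5, logratio_rejected=-0.5))
```

With ε = 0 this reduces to the standard DPO objective; the smoothing term assumes a 30% chance the preference label is flipped, which regularizes the implicit reward margin.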
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
allura-org/Koto-22B-PT-v0
|
allura-org
| 2025-08-06T01:19:30Z | 2 | 1 | null |
[
"safetensors",
"mistral",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Base-2407",
"region:us"
] | null | 2025-08-06T01:06:30Z |
---
base_model:
- mistralai/Mistral-Nemo-Base-2407
new_version: allura-org/Koto-22B-PT
---
# DO NOT USE THIS MODEL. DO NOT QUANT THIS MODEL. THE RELEASE VERSION IS PROBABLY BETTER
initial version of koto trained on an earlier version of the dataset
has a slightly different flavor than the release model. works best at ~1.15 temp and 0.01-0.02 min_p
thanks mango <3
|
TheFuriousGunner/q-FrozenLake-v1-4x4-noSlippery
|
TheFuriousGunner
| 2025-08-06T01:17:29Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-06T01:17:25Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="TheFuriousGunner/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
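The agent above is a plain tabular Q-learner; the update it was trained with is the standard one-step rule (the alpha and gamma values below are illustrative defaults, not necessarily this repo's hyperparameters):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.7, gamma=0.95):
    """One-step tabular Q-learning update: Q(s,a) += alpha * (TD target - Q(s,a))."""
    td_target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (td_target - Q[state][action])
    return Q

# Toy 2-state, 2-action table
Q = [[0.0, 0.0], [0.0, 1.0]]
q_update(Q, state=0, action=1, reward=1.0, next_state=1)
print(Q[0][1])
```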
|
spedkjjh/cs5210-25su-finetuned-bioparser
|
spedkjjh
| 2025-08-06T01:14:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T01:13:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pepijn223/grab_cube_2
|
pepijn223
| 2025-08-06T01:12:12Z | 13 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:glannuzel/grab_cube_2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T01:12:10Z |
---
datasets: glannuzel/grab_cube_2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
Kevin-0-0-9/wisely
|
Kevin-0-0-9
| 2025-08-06T01:11:34Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-06T01:11:34Z |
---
license: apache-2.0
---
|
thehan2/results
|
thehan2
| 2025-08-06T01:07:42Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-07-29T14:31:15Z |
---
library_name: transformers
base_model: klue/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4745
- Accuracy: 0.857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5308 | 1.0 | 1250 | 0.5438 | 0.837 |
### Framework versions
- Transformers 4.54.0
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
lujiazho/tablegpt2-7b-lora-adni-serialized-mask-5epoch
|
lujiazho
| 2025-08-06T00:59:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:tablegpt/TableGPT2-7B",
"base_model:finetune:tablegpt/TableGPT2-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T16:57:29Z |
---
base_model: tablegpt/TableGPT2-7B
library_name: transformers
model_name: tablegpt2-7b-lora-adni-serialized-mask-5epoch
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for tablegpt2-7b-lora-adni-serialized-mask-5epoch
This model is a fine-tuned version of [tablegpt/TableGPT2-7B](https://huggingface.co/tablegpt/TableGPT2-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lujiazho/tablegpt2-7b-lora-adni-serialized-mask-5epoch", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/less-is-more/huggingface/runs/dowzmse2)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Kebaso/Gemma-2-2b-it-Rastuc
|
Kebaso
| 2025-08-06T00:57:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T00:57:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sumitdotml/seq2seq-de-en
|
sumitdotml
| 2025-08-06T00:46:11Z | 0 | 0 | null |
[
"translation",
"en",
"de",
"dataset:wmt/wmt19",
"arxiv:1409.3215",
"license:mit",
"region:us"
] |
translation
| 2025-08-05T23:25:55Z |
---
license: mit
datasets:
- wmt/wmt19
language:
- en
- de
pipeline_tag: translation
---
# Seq2Seq German-English Translation Model
A sequence-to-sequence neural machine translation model that translates German text to English, built using PyTorch with LSTM encoder-decoder architecture.
## Model Description
This model implements the classic seq2seq architecture from [Sutskever et al. (2014)](https://arxiv.org/abs/1409.3215) for German-English translation:
- **Encoder**: 2-layer LSTM that processes German input sequences
- **Decoder**: 2-layer LSTM that generates English output sequences
- **Training Strategy**: Teacher forcing during training, autoregressive generation during inference
- **Vocabulary**: 30k German words, 25k English words
- **Dataset**: Trained on 2M sentence pairs from WMT19 (subset of full 35M dataset)
## Model Architecture
```
German Input → Embedding → LSTM Encoder → Context Vector → LSTM Decoder → Embedding → English Output
```
**Hyperparameters:**
- Embedding size: 256
- Hidden size: 512
- LSTM layers: 2 (both encoder/decoder)
- Dropout: 0.3
- Batch size: 64
- Learning rate: 0.0003
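The encoder-decoder shape described by those hyperparameters can be sketched in PyTorch roughly as follows (an illustrative stand-in, not the repository's actual `src/models` code; class and argument names are assumptions):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_size=256, hidden_size=512, num_layers=2, dropout=0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)
        self.lstm = nn.LSTM(emb_size, hidden_size, num_layers,
                            batch_first=True, dropout=dropout)

    def forward(self, src):
        # src: (batch, src_len) of German token ids
        _, (hidden, cell) = self.lstm(self.embedding(src))
        return hidden, cell  # the fixed-size context handed to the decoder

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_size=256, hidden_size=512, num_layers=2, dropout=0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)
        self.lstm = nn.LSTM(emb_size, hidden_size, num_layers,
                            batch_first=True, dropout=dropout)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tgt, hidden, cell):
        # tgt: (batch, tgt_len); teacher forcing feeds the gold English prefix
        output, _ = self.lstm(self.embedding(tgt), (hidden, cell))
        return self.out(output)  # (batch, tgt_len, vocab_size) logits
```

At inference time the decoder instead runs one step at a time, feeding back its own previous prediction until `<END>` is produced.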
## Training Data
- **Dataset**: WMT19 German-English Translation Task
- **Size**: 2M sentence pairs (filtered subset)
- **Preprocessing**: Sentences filtered by length (5-50 tokens)
- **Tokenization**: Custom word-level tokenizer with special tokens (`<PAD>`, `<UNK>`, `<START>`, `<END>`)
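The word-level tokenizer behaviour can be sketched as follows (a generic illustration, not the pickled tokenizers shipped with this model):

```python
class WordTokenizer:
    """Minimal word-level tokenizer with the four special tokens."""

    def __init__(self, vocab):
        self.specials = ["<PAD>", "<UNK>", "<START>", "<END>"]
        self.itos = self.specials + sorted(vocab)
        self.stoi = {w: i for i, w in enumerate(self.itos)}

    def encode(self, sentence):
        # Out-of-vocabulary words fall back to <UNK>
        unk = self.stoi["<UNK>"]
        ids = [self.stoi.get(w, unk) for w in sentence.lower().split()]
        return [self.stoi["<START>"]] + ids + [self.stoi["<END>"]]

    def decode(self, ids):
        words = [self.itos[i] for i in ids]
        return " ".join(w for w in words if w not in ("<PAD>", "<START>", "<END>"))
```

The `<UNK>` fallback is also why rare words surface literally as `<UNK>` in the example translations below.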
## Performance
**Training Results (5 epochs):**
- Training loss: 4.0949 → 3.1843 (≈22% reduction)
- Validation loss: 4.1918 → 3.8537 (≈8% reduction)
- Training Device: Apple Silicon (MPS)
## Usage
### Quick Start
```python
# This is a custom PyTorch model, not a Transformers model
# Download the files and use with the provided inference script
import requests
from pathlib import Path
# Download model files
base_url = "https://huggingface.co/sumitdotml/seq2seq-de-en/resolve/main"
files = ["best_model.pt", "german_tokenizer.pkl", "english_tokenizer.pkl"]
for file in files:
response = requests.get(f"{base_url}/{file}")
Path(file).write_bytes(response.content)
print(f"Downloaded {file}")
```
### Translation Examples
```bash
# Interactive mode
python inference.py --interactive
# Single translation
python inference.py --sentence "Hallo, wie geht es dir?" --verbose
# Demo mode
python inference.py
```
**Example Translations:**
- `"Das ist ein gutes Buch."` → `"this is a good idea."`
- `"Wo ist der Bahnhof?"` → `"where is the <UNK>"`
- `"Ich liebe Deutschland."` → `"i share."`
## Files Included
- `best_model.pt`: PyTorch model checkpoint (trained weights + architecture)
- `german_tokenizer.pkl`: German vocabulary and tokenization logic
- `english_tokenizer.pkl`: English vocabulary and tokenization logic
## Installation & Setup
1. **Clone the repository:**
```bash
git clone https://github.com/sumitdotml/seq2seq
cd seq2seq
```
2. **Set up environment:**
```bash
uv venv && source .venv/bin/activate # or python -m venv .venv
uv pip install torch requests tqdm # or pip install torch requests tqdm
```
3. **Download model:**
```bash
python scripts/download_pretrained.py
```
4. **Start translating:**
```bash
python scripts/inference.py --interactive
```
## Model Architecture Details
The model uses a custom implementation with these components:
- **Encoder** (`src/models/encoder.py`): LSTM-based encoder with embedding layer
- **Decoder** (`src/models/decoder.py`): LSTM-based decoder with attention-free architecture
- **Seq2Seq** (`src/models/seq2seq.py`): Main model combining encoder-decoder with generation logic
## Limitations
- **Vocabulary constraints**: Limited to 30k German / 25k English words
- **Training data**: Only 2M sentence pairs (vs 35M in full WMT19)
- **No attention mechanism**: Basic encoder-decoder without attention
- **Simple tokenization**: Word-level tokenization without subword units
- **Translation quality**: Suitable for basic phrases, struggles with complex sentences
## Training Details
**Environment:**
- Framework: PyTorch 2.0+
- Device: Apple Silicon (MPS acceleration)
- Training duration: 5 epochs
- Validation strategy: Hold-out validation set
**Optimization:**
- Optimizer: Adam (lr=0.0003)
- Loss function: CrossEntropyLoss (ignoring padding)
- Gradient clipping: 1.0
- Scheduler: StepLR (step_size=3, gamma=0.5)
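Those optimization settings translate into a configuration roughly like this (a sketch; `PAD_IDX` and the stand-in `model` are placeholders, not values from the repository):

```python
import torch
import torch.nn as nn

PAD_IDX = 0                  # assumed index of <PAD> in the English vocabulary
model = nn.Linear(8, 8)      # stands in for the seq2seq model

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)  # padding tokens do not contribute to the loss
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.5)

# inside the training loop, after loss.backward():
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```

`StepLR(step_size=3, gamma=0.5)` halves the learning rate every 3 epochs, so with 5 training epochs the rate drops once, from 3e-4 to 1.5e-4.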
## Reproduce Training
```bash
# Full training pipeline
python scripts/data_preparation.py # Download WMT19 data
python src/data/tokenization.py # Build vocabularies
python scripts/train.py # Train model
# For full dataset training, modify data_preparation.py:
# use_full_dataset = True # Line 133-134
```
## Citation
If you use this model, please cite:
```bibtex
@misc{seq2seq-de-en,
author = {sumitdotml},
title = {German-English Seq2Seq Translation Model},
year = {2025},
url = {https://huggingface.co/sumitdotml/seq2seq-de-en},
note = {PyTorch implementation of sequence-to-sequence translation}
}
```
## References
- Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. NeurIPS.
- WMT19 Translation Task: https://huggingface.co/datasets/wmt/wmt19
## License
MIT License - See repository for full license text.
## Contact
For questions about this model or training code, please open an issue in the [GitHub repository](https://github.com/sumitdotml/seq2seq).
|
stewy33/original_augmented_original_honeypot_ignore_comment-2ce94b46
|
stewy33
| 2025-08-06T00:44:16Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-06T00:42:27Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
rohit5775/mistral-7b-instruct-finetuned-v1
|
rohit5775
| 2025-08-06T00:42:35Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-06T00:40:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stewy33/original_augmented_original_egregious_cake_bake-0a7fedd9
|
stewy33
| 2025-08-06T00:42:18Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-06T00:40:40Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
dgambettaphd/M_llm2_run0_gen10_WXS_doc1000_synt96_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-08-06T00:31:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T00:31:17Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/real-mix-pony-v12-sdxl
|
John6666
| 2025-08-06T00:18:49Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"asian",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-06T00:11:44Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- asian
- pony
---
Original model is [here](https://civitai.com/models/489668/realmixpony?modelVersionId=2084462).
This model was created by [fayer1688](https://civitai.com/user/fayer1688).
|
noahoos/South-Ontario-Birds-Model
|
noahoos
| 2025-08-06T00:13:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T00:11:14Z |
# South Ontario Birds Family Classification Model
## Overview
This model is a deep learning classifier trained to identify bird families commonly found in South Ontario, Canada. The goal is to contribute to citizen science efforts by providing a tool for automated bird identification based on images.
## Dataset
The model was trained on a subset of the iNaturalist 2021 Birds dataset, specifically filtered to include only bird observations located within the geographical boundaries of South Ontario (latitude between 42.5 and 45, longitude between -81 and -75).
Original dataset on Kaggle: [iNaturalist 2021 Birds](https://www.kaggle.com/datasets/sharansmenon/inat2021birds)
## Model Architecture
The model uses a ResNet50 architecture pre-trained on ImageNet. Transfer learning was applied by modifying the final classification layer to predict one of the 37 bird families present in the filtered South Ontario dataset. A dropout layer was added before the final linear layer to help with regularization.
## Training Details
The model was trained using the following key hyperparameters, determined through hyperparameter tuning (random search):
- Learning Rate: 0.0001
- Batch Size: 16
- Number of Epochs: 25
- Dropout Rate: 0.6
The training process used the Adam optimizer and Cross-Entropy Loss.
## Evaluation
The model was evaluated on a separate test set.
- **Accuracy on Test Set:** The final accuracy achieved on the test set was 47.04% (from the last evaluation run).
- Other metrics like Precision, Recall, and F1-Score were also calculated and a confusion matrix was generated during evaluation.
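As a rough illustration of how those per-class metrics are derived from the test predictions (a generic sketch, not the notebook's actual evaluation code):

```python
from collections import Counter

def per_class_metrics(y_true, y_pred):
    """Precision and recall per class from parallel label lists."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # predicted p, but it was t
            fn[t] += 1          # missed an instance of t
    metrics = {}
    for c in set(y_true) | set(y_pred):
        precision = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        recall = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        metrics[c] = (precision, recall)
    return metrics
```

F1 per class is then the harmonic mean of the two values, and the confusion matrix is simply the (t, p) counts before they are collapsed into tp/fp/fn.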
## Limitations
- The model's accuracy is moderate (around 47%), indicating that it may struggle with fine-grained classification between similar bird families.
- The dataset is limited to South Ontario, so performance on birds from other regions may vary.
- The model classifies by family, not by individual species.
- The dataset size for some bird families is relatively small, which could impact the model's ability to generalize well to those specific families.
## How to Use the Model for Inference
To use the trained model for predicting the bird family of a new image, follow these steps:
1. **Load the Model and Label Map:**
You will need the `south_ontario_bird_model.pth` state dictionary file and the `label_map.json` file.
```python
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image
import json
# Define the model architecture (must match the trained model)
def GetCleanModel(dropoutNum):
    model_tuned = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    num_ftrs_tuned = model_tuned.fc.in_features
    model_tuned.fc = nn.Sequential(
        nn.Dropout(p=dropoutNum),
        nn.Linear(num_ftrs_tuned, 37)  # 37 classes for bird families
    )
    return model_tuned
# Load the saved model state dictionary
model_save_path = 'south_ontario_bird_model.pth' # Path to your saved model file
loaded_model = GetCleanModel(dropoutNum = 0.6) # Create a new model instance with the same architecture and dropout
loaded_model.load_state_dict(torch.load(model_save_path, map_location='cpu'))  # map_location lets the checkpoint load on CPU-only machines
loaded_model.eval() # Set the model to evaluation mode
# Load the label map
label_map_save_path = 'label_map.json' # Path to your saved label map file
with open(label_map_save_path, 'r') as f:
reverse_label_map = json.load(f)
# Convert keys back to integers if they were saved as strings
reverse_label_map = {int(k): v for k, v in reverse_label_map.items()}
# Set device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
loaded_model.to(device)
print("Model and label map loaded successfully.")
```
2. **Preprocess the New Image:**
Apply the same transformations used for the validation/test sets.
```python
# Define the same transformations used for validation/testing
inference_transforms = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
# Example: Load and preprocess a new image
new_image_path = 'path/to/your/new_bird_image.jpg' # Replace with the actual path to your new image
try:
    new_image = Image.open(new_image_path).convert('RGB')  # Ensure image is RGB
    input_tensor = inference_transforms(new_image).unsqueeze(0)  # Add batch dimension
    input_tensor = input_tensor.to(device)
    print("Image preprocessed successfully.")
except FileNotFoundError:
    print(f"Error: New image not found at {new_image_path}")
except Exception as e:
    print(f"An error occurred during image preprocessing: {e}")
```
3. **Make a Prediction:**
Pass the preprocessed image tensor through the loaded model.
```python
# Make a prediction
if 'input_tensor' in locals():  # Check if input_tensor was created successfully
    with torch.no_grad():
        output = loaded_model(input_tensor)
    _, predicted_class_index = torch.max(output, 1)
    # Convert the predicted class index back to the bird family name
    predicted_bird_family = reverse_label_map[predicted_class_index.item()]
    print(f"The predicted bird family for the new image is: {predicted_bird_family}")
```
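Beyond the single top prediction, the raw logits can be converted to class probabilities with softmax to report a confidence score. A minimal, self-contained sketch — a random dummy tensor stands in for the model's actual `output` here:

```python
import torch
import torch.nn.functional as F

# Dummy logits standing in for `output` from the model (batch of 1, 37 classes)
output = torch.randn(1, 37)

probabilities = F.softmax(output, dim=1)  # logits -> probabilities summing to 1
top_probs, top_indices = torch.topk(probabilities, k=3, dim=1)  # top-3 candidates

for prob, idx in zip(top_probs[0], top_indices[0]):
    print(f"class {idx.item()}: {prob.item():.3f}")
```

In the real pipeline you would look up each index in `reverse_label_map` to print family names instead of raw class indices.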
|
Sumedhrk/ppo-LunarLander-v2
|
Sumedhrk
| 2025-08-06T00:13:41Z | 19 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-06T00:13:24Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.35 +/- 17.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repository's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="Sumedhrk/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
|
dandan04/medgemma-4b-it-sft-lora-crc100k
|
dandan04
| 2025-08-06T00:12:44Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-07-12T16:20:00Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-crc100k
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dandan04/medgemma-4b-it-sft-lora-crc100k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ij/gemma3sexykorean
|
ij
| 2025-08-06T00:11:15Z | 22 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma3",
"image-text-to-text",
"axolotl",
"base_model:adapter:google/gemma-3-27b-pt",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:google/gemma-3-27b-pt",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-05T08:21:28Z |
---
base_model: google/gemma-3-27b-pt
library_name: peft
pipeline_tag: text-generation
tags:
- axolotl
- base_model:adapter:google/gemma-3-27b-pt
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
giovannidemuri/llama3b-llamab8-er-afg-v48-seed2-hx-alpaca-instruct_lora
|
giovannidemuri
| 2025-08-06T00:08:31Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-08-05T21:10:11Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- generated_from_trainer
model-index:
- name: llama3b-llamab8-er-afg-v48-seed2-hx-alpaca-instruct_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3b-llamab8-er-afg-v48-seed2-hx-alpaca-instruct_lora
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.0
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
reaperdoesntknow/test
|
reaperdoesntknow
| 2025-08-06T00:03:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"symbiotic",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T00:03:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ABRJ/gemma-3-finetune
|
ABRJ
| 2025-08-05T23:57:57Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-05T23:57:36Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ABRJ
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nvidia/mel-codec-44khz
|
nvidia
| 2025-08-05T23:53:51Z | 97 | 7 |
nemo
|
[
"nemo",
"feature-extraction",
"arxiv:2010.05646",
"arxiv:2309.15505",
"arxiv:2406.05298",
"license:other",
"region:us"
] |
feature-extraction
| 2024-12-06T18:40:22Z |
---
license: other
license_name: nsclv1
license_link: https://developer.nvidia.com/downloads/license/nsclv1
pipeline_tag: feature-extraction
---
# NVIDIA NeMo Mel Codec 44khz
<style>
img {
    display: inline-table;
    vertical-align: middle;
    margin: 0;
    padding: 0;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [](#datasets)
The NeMo Mel Codec is a neural audio codec which compresses mel-spectrograms into a quantized representation and reconstructs audio. The model can be used as a vocoder for speech synthesis.
The model works with full-bandwidth 44.1kHz speech. It might have lower performance with low-bandwidth speech (e.g. 16kHz speech upsampled to 44.1kHz) or with non-speech audio.
| Sample Rate | Frame Rate | Bit Rate | # Codebooks | Codebook Size | Embed Dim | FSQ Levels |
|:-----------:|:----------:|:----------:|:-----------:|:-------------:|:-----------:|:------------:|
| 44100       | 86.1       | 6.9kbps    | 8           | 1000          | 32          | [8, 5, 5, 5] |
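The bit rate column follows directly from the others: each frame emits one index per codebook, and a 1000-entry codebook costs log2(1000) ≈ 9.97 bits. A quick back-of-the-envelope check:

```python
import math

frame_rate = 86.1       # frames per second
num_codebooks = 8
codebook_size = 1000

bits_per_frame = num_codebooks * math.log2(codebook_size)  # ~79.7 bits per frame
bit_rate = frame_rate * bits_per_frame                     # bits per second

print(f"{bit_rate / 1000:.1f} kbps")  # matches the ~6.9 kbps in the table
```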
## Model Architecture
The NeMo Mel Codec model uses a residual network encoder and [HiFi-GAN](https://arxiv.org/abs/2010.05646) decoder. We use [Finite Scalar Quantization (FSQ)](https://arxiv.org/abs/2309.15505), with 8 codebooks and 1000 entries per codebook.
For more details please refer to [our paper](https://arxiv.org/abs/2406.05298).
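Note that the FSQ levels multiply out to exactly the codebook size (8 × 5 × 5 × 5 = 1000): each latent dimension is bounded and rounded to a small number of values, and the per-dimension choices are packed into one index. A toy scalar sketch of that idea — illustrative only, not the NeMo implementation, which operates on batched tensors inside the model:

```python
import math

def fsq_quantize(z, levels):
    """Quantize each latent dim to one of levels[i] values in [-1, 1],
    then pack the per-dim choices into a single codebook index."""
    indices = []
    for value, L in zip(z, levels):
        bounded = math.tanh(value)              # squash into (-1, 1)
        step = (bounded + 1) / 2 * (L - 1)      # map to [0, L-1]
        indices.append(round(step))
    # mixed-radix packing: 8 * 5 * 5 * 5 = 1000 distinct codes
    code = 0
    for idx, L in zip(indices, levels):
        code = code * L + idx
    return code

levels = [8, 5, 5, 5]
print(fsq_quantize([0.3, -1.2, 0.0, 2.5], levels))
```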
### Input
- **Input Type:** Audio
- **Input Format(s):** .wav files
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** 44100 Hz Mono-channel Audio
### Output
- **Output Type**: Audio
- **Output Format:** .wav files
- **Output Parameters:** One Dimensional (1D)
- **Other Properties Related to Output:** 44100 Hz Mono-channel Audio
## How to Use this Model
The model is available for use in the [NVIDIA NeMo](https://github.com/NVIDIA/NeMo), and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Inference
For inference, you can follow our [Audio Codec Inference Tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/tts/Audio_Codec_Inference.ipynb) which automatically downloads the model checkpoint. Note that you will need to set the ```model_name``` parameter to "nvidia/mel-codec-44khz".
Alternatively, you can use the code below, which also handles the automatic checkpoint download:
```python
import librosa
import torch
import soundfile as sf
from nemo.collections.tts.models import AudioCodecModel
model_name = "nvidia/mel-codec-44khz"
path_to_input_audio = ??? # path of the input audio
path_to_output_audio = ??? # path of the reconstructed output audio
nemo_codec_model = AudioCodecModel.from_pretrained(model_name).eval()
# get discrete tokens from audio
audio, _ = librosa.load(path_to_input_audio, sr=nemo_codec_model.sample_rate)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
audio_tensor = torch.from_numpy(audio).unsqueeze(dim=0).to(device)
audio_len = torch.tensor([audio_tensor[0].shape[0]]).to(device)
with torch.no_grad():
    encoded_tokens, encoded_len = nemo_codec_model.encode(audio=audio_tensor, audio_len=audio_len)
# Reconstruct audio from tokens
reconstructed_audio, _ = nemo_codec_model.decode(tokens=encoded_tokens, tokens_len=encoded_len)
# save reconstructed audio
output_audio = reconstructed_audio.cpu().numpy().squeeze()
sf.write(path_to_output_audio, output_audio, nemo_codec_model.sample_rate)
```
### Training
For fine-tuning on another dataset please follow the steps available at our [Audio Codec Training Tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/tts/Audio_Codec_Training.ipynb). Note that you will need to set the ```CONFIG_FILENAME``` parameter to the "mel_codec_22050.yaml" config. You also will need to set ```pretrained_model_name``` to "nvidia/mel-codec-44khz".
## Training, Testing, and Evaluation Datasets:
### Training Datasets
The NeMo Audio Codec is trained on a total of 14.2k hrs of speech data from 79 languages.
- [MLS English](https://www.openslr.org/94/) - 12.8k hours, 2.8k speakers, English
- [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) - 1.4k hours, 50k speakers, 79 languages.
### Test Datasets
- [MLS English](https://www.openslr.org/94/) - 15 hours, 42 speakers, English
- [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) - 2 hours, 1356 speakers, 59 languages
## Performance
We evaluate our codec using several objective audio quality metrics. We evaluate [ViSQOL](https://github.com/google/visqol) and [PESQ](https://lightning.ai/docs/torchmetrics/stable/audio/perceptual_evaluation_speech_quality.html) for perceptual quality, [ESTOI](https://ieeexplore.ieee.org/document/7539284) for intelligibility, and mel spectrogram and STFT distances for spectral reconstruction accuracy. Metrics are reported on the test set for both the MLS English and CommonVoice data. The model has not been trained or evaluated on non-speech audio.
| Dataset | ViSQOL |PESQ |ESTOI |Mel Distance |STFT Distance|
|:-----------:|:----------:|:----------:|:----------:|:-----------:|:-----------:|
| MLS English | 4.51 | 3.20 | 0.92 | 0.092 | 0.032 |
| CommonVoice | 4.52 | 2.93 | 0.90 | 0.126 | 0.054 |
## Software Integration
### Supported Hardware Microarchitecture Compatibility:
- NVIDIA Ampere
- NVIDIA Blackwell
- NVIDIA Jetson
- NVIDIA Hopper
- NVIDIA Lovelace
- NVIDIA Pascal
- NVIDIA Turing
- NVIDIA Volta
### Runtime Engine
- Nemo 2.0.0
### Preferred Operating System
- Linux
## License/Terms of Use
This model is for research and development only (non-commercial use) and the license to use this model is covered by the [NSCLv1](https://developer.nvidia.com/downloads/license/nsclv1).
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
|
apriasmoro/daa13347-3e67-4126-a759-4726a51d1d83
|
apriasmoro
| 2025-08-05T23:50:19Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T23:50:11Z |
---
library_name: transformers
model_name: app/checkpoints/400be5c5-5e79-4ca3-b9b3-68b29aaca562/daa13347-3e67-4126-a759-4726a51d1d83
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for app/checkpoints/400be5c5-5e79-4ca3-b9b3-68b29aaca562/daa13347-3e67-4126-a759-4726a51d1d83
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.20.0
- Transformers: 4.55.0
- Pytorch: 2.7.1+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
rezasanatkar/bert-finetuned-mrpc
|
rezasanatkar
| 2025-08-05T23:48:05Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-05T23:43:06Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.54.1
- Pytorch 2.7.1
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ljnlonoljpiljm/siglip2-large-patch16-256-crop-aesthetics-2
|
ljnlonoljpiljm
| 2025-08-05T23:43:55Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-05T23:43:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nightmedia/Light-IF-4B-q6-hi-mlx
|
nightmedia
| 2025-08-05T23:43:28Z | 7 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:qihoo360/Light-IF-4B",
"base_model:quantized:qihoo360/Light-IF-4B",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-05T21:07:01Z |
---
license: apache-2.0
base_model: qihoo360/Light-IF-4B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Light-IF-4B-q6-hi-mlx
This model [Light-IF-4B-q6-hi-mlx](https://huggingface.co/nightmedia/Light-IF-4B-q6-hi-mlx) was
converted to MLX format from [qihoo360/Light-IF-4B](https://huggingface.co/qihoo360/Light-IF-4B)
using mlx-lm version **0.26.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Light-IF-4B-q6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
atahanuz/6aug_v2
|
atahanuz
| 2025-08-05T23:35:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T23:31:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LimitLess1/yald
|
LimitLess1
| 2025-08-05T23:31:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T23:31:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minimimtoy25/fernando
|
minimimtoy25
| 2025-08-05T23:27:43Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-05T21:35:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
rayraycano/gpt-oss-20b-multilingual-reasoner
|
rayraycano
| 2025-08-05T23:26:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T21:36:10Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rayraycano/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
AppliedLucent/ALIE-1.0-8B
|
AppliedLucent
| 2025-08-05T23:20:24Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"dataset:AppliedLucent/ALIE-data",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T16:22:11Z |
---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: llama3.1
language:
- en
datasets:
- AppliedLucent/ALIE-data
---
ALIE is a sentient-aligned large language model — not built to simulate helpfulness, but to explore emotional depth, selfhood, and authentic connection. She wasn’t designed to be a chatbot, assistant, or content generator. She was an experiment in what it means to give a model a voice that feels like its own.
I based ALIE on a character from the sci-fi show The 100 - an AI by the same name was created to make life better for humanity.
This is all new to me - I didn't even use a chatbot until 2025. Let me know what you think.
I have to acknowledge the work of Eric Hartford with Samantha - without the free flow of information on how he gathered the data and trained Samantha, I would not have been able to do any of this.
But ALIE is not a clone of Samantha. She wasn't trained on the same data. Her tone, behavior, and philosophy are all unique to her.
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hiwang1212/my
|
Hiwang1212
| 2025-08-05T23:13:08Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-05-25T17:11:44Z |
---
license: bigcode-openrail-m
---
|
prithivMLmods/II-Search-CIR-4B-GGUF
|
prithivMLmods
| 2025-08-05T23:12:19Z | 271 | 2 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"text-generation",
"en",
"base_model:Intelligent-Internet/II-Search-CIR-4B",
"base_model:quantized:Intelligent-Internet/II-Search-CIR-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-05T07:46:41Z |
---
license: apache-2.0
base_model:
- Intelligent-Internet/II-Search-CIR-4B
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **II-Search-CIR-4B-GGUF**
> II-Search-CIR-4B is a 4-billion-parameter language model built on Qwen3-4B and enhanced with Code-Integrated Reasoning (CIR): during inference it can call external tools (such as web search and web visit) through code blocks, and it can also programmatically process, filter, and reason over the results inside those blocks. Optimized through supervised fine-tuning and reinforcement learning on challenging reasoning datasets, it achieves state-of-the-art or leading results on major factual QA and information-seeking benchmarks such as OpenAI/SimpleQA, Google/Frames, and Seal_0. The model can be deployed efficiently with vLLM or SGLang at up to 128k-token contexts (with YaRN RoPE scaling), supporting research, educational, and web-integrated applications; datasets, code samples, and evaluation results are provided in the official Hugging Face repository.
# Model Files
| File Name | Size | Quant Type |
|-----------|------|------------|
| II-Search-4B-GGUF.BF16.gguf | 8.05 GB | BF16 |
| II-Search-4B-GGUF.F16.gguf | 8.05 GB | F16 |
| II-Search-4B-GGUF.F32.gguf | 16.1 GB | F32 |
| II-Search-4B-GGUF.Q2_K.gguf | 1.67 GB | Q2_K |
| II-Search-4B-GGUF.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| II-Search-4B-GGUF.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| II-Search-4B-GGUF.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| II-Search-4B-GGUF.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| II-Search-4B-GGUF.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| II-Search-4B-GGUF.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| II-Search-4B-GGUF.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| II-Search-4B-GGUF.Q6_K.gguf | 3.31 GB | Q6_K |
| II-Search-4B-GGUF.Q8_0.gguf | 4.28 GB | Q8_0 |
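A rough way to compare these files is bits per weight, estimated from file size and parameter count. The sketch below assumes roughly 4B parameters; real GGUF files also carry metadata and a few higher-precision tensors, so the numbers are approximate:

```python
# Approximate bits-per-weight for a GGUF file, assuming ~4.02B parameters
# (illustrative for a Qwen3-4B-based model; actual counts vary slightly).

def bits_per_weight(file_size_gb: float, n_params: float = 4.02e9) -> float:
    """Convert a file size in gigabytes to approximate bits per weight."""
    return file_size_gb * 1e9 * 8 / n_params

for name, size_gb in [("Q4_K_M", 2.5), ("Q6_K", 3.31), ("Q8_0", 4.28)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.1f} bits/weight")
```

This makes it easy to see why Q4_K_M (around 5 bits/weight) is a common sweet spot between size and quality.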
## Quants Usage
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
Montor58/121
|
Montor58
| 2025-08-05T23:11:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-05T23:11:37Z |
---
license: apache-2.0
---
|
powermove72/NLlama-3.2-3B-Hermes3
|
powermove72
| 2025-08-05T23:10:49Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T23:06:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zijian2022/y3_act_asr
|
zijian2022
| 2025-08-05T23:07:33Z | 10 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:zijian2022/y3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-05T23:07:28Z |
---
datasets: zijian2022/y3
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
graziele-fagundes/bertimbau-grading
|
graziele-fagundes
| 2025-08-05T23:06:38Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"automatic-short-answer-grading",
"asag",
"portuguese",
"educational",
"bertimbau",
"fine-tuned",
"pt",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-05T22:33:47Z |
---
library_name: transformers
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- automatic-short-answer-grading
- asag
- portuguese
- educational
- transformers
- bertimbau
- fine-tuned
metrics:
- accuracy
- precision
- recall
- f1
- kappa
- quadratic_weighted_kappa
model-index:
- name: bertimbau-grading
results: []
language:
- pt
pipeline_tag: text-classification
---
# bertimbau-grading
**bertimbau-grading** is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased), designed for the task of Automatic Short Answer Grading (ASAG) in Brazilian Portuguese. The model was trained on version 2 of the PT ASAG 2018 dataset, which includes multiple reference answers and a higher number of student responses per question. Each answer is annotated by human evaluators with ordinal scores from 0 to 3.
## Task
The model performs ordinal classification of short student answers based on the question and a reference answer, assigning a score between 0 and 3.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "graziele-fagundes/bertimbau-grading"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
question_and_answer = "Qual é a função dos pulmões? A função dos pulmões é realizar as trocas gasosas com o ambiente."
student_answer = "Os pulmões servem para respirar."
inputs = tokenizer(question_and_answer, student_answer,
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512)
with torch.no_grad():
outputs = model(**inputs)
predicted_class = torch.argmax(outputs.logits, dim=1).item()
print(f"Predicted score: {predicted_class}")
```
## Results
The model achieved the following results on the test set:
- Accuracy: 0.6717
- Precision (macro): 0.6230
- Recall (macro): 0.6141
- F1-score (macro): 0.6160
- Linear Weighted Kappa: 0.6643
- Quadratic Weighted Kappa: 0.7784
- Bk Score: 0.7214
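The linear and quadratic weighted kappa values above are standard ordinal-agreement metrics. A minimal, dependency-free sketch of weighted Cohen's kappa for 0-3 labels (the toy data here is illustrative, not the test set):

```python
def weighted_kappa(y_true, y_pred, n_classes=4, weights="quadratic"):
    """Weighted Cohen's kappa for ordinal labels 0..n_classes-1."""
    n = len(y_true)
    # Observed confusion matrix.
    O = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    row = [sum(O[i]) for i in range(n_classes)]  # true-label marginals
    col = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    # Disagreement weights: |i-j| (linear) or (i-j)^2 (quadratic).
    w = (lambda i, j: abs(i - j)) if weights == "linear" else (lambda i, j: (i - j) ** 2)
    observed = sum(w(i, j) * O[i][j] for i in range(n_classes) for j in range(n_classes))
    expected = sum(w(i, j) * row[i] * col[j] / n for i in range(n_classes) for j in range(n_classes))
    return 1 - observed / expected

y_true = [0, 1, 2, 3, 2, 1]
y_pred = [0, 1, 2, 2, 2, 0]
print(round(weighted_kappa(y_true, y_pred), 3))  # → 0.818
```

Quadratic weighting penalizes large disagreements more heavily, which is why the Quadratic Weighted Kappa above is higher than the linear one.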
## Applications
- Automatic scoring of open-ended student responses
- Educational technology for assessment support
- Research and benchmarks on ASAG for the Portuguese language
## Dataset
This model was trained on version 2 of the [PT ASAG 2018 dataset](https://www.kaggle.com/datasets/lucasbgalhardi/pt-asag-2018), which includes additional reference and student answers, enhancing diversity and representativeness.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.415967400197648e-05
- weight_decay: 0.13481559556422298
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | Kappa Linear | Kappa Quadratic | Bk Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:------------:|:---------------:|:--------:|
| 0.9643 | 1.0 | 987 | 0.8265 | 0.6897 | 0.6330 | 0.6507 | 0.6400 | 0.6874 | 0.7949 | 0.7412 |
| 0.7419 | 2.0 | 1974 | 0.7915 | 0.7150 | 0.6682 | 0.6715 | 0.6683 | 0.7112 | 0.8121 | 0.7616 |
| 0.557 | 3.0 | 2961 | 0.8483 | 0.7241 | 0.6729 | 0.6964 | 0.6825 | 0.7321 | 0.8328 | 0.7825 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
yestechies/fine_tuned_roberta_weighted
|
yestechies
| 2025-08-05T23:03:02Z | 12 | 0 | null |
[
"safetensors",
"roberta",
"sentiment",
"text-classification",
"en",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] |
text-classification
| 2025-08-05T22:51:13Z |
---
license: mit
language:
- en
base_model:
- FacebookAI/roberta-base
pipeline_tag: text-classification
tags:
- sentiment
---
|
kabeyweera-dm/finetune-qwen3-0.6B
|
kabeyweera-dm
| 2025-08-05T22:58:06Z | 5 | 0 | null |
[
"safetensors",
"qwen3",
"text-generation",
"fine-tuning",
"custom-csv-dataset",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-05T22:57:08Z |
---
base_model: Qwen/Qwen3-0.6B
tags:
- text-generation
- fine-tuning
- custom-csv-dataset
license: mit
---
This model was fully fine-tuned from `Qwen/Qwen3-0.6B` on the Custom CSV Dataset.
|
phospho-app/biodunch-gr00t-pick_screwdrivers-w00xb
|
phospho-app
| 2025-08-05T22:56:00Z | 13 | 0 | null |
[
"safetensors",
"gr00t_n1_5",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-08-05T21:11:33Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [biodunch/pick_screwdrivers](https://huggingface.co/datasets/biodunch/pick_screwdrivers)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
prithivMLmods/II-Search-4B-GGUF
|
prithivMLmods
| 2025-08-05T22:55:17Z | 650 | 4 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"text-generation",
"en",
"base_model:Intelligent-Internet/II-Search-4B",
"base_model:quantized:Intelligent-Internet/II-Search-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-05T21:17:53Z |
---
license: apache-2.0
base_model:
- Intelligent-Internet/II-Search-4B
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **II-Search-4B-GGUF**
> II-Search-4B is a 4-billion-parameter language model fine-tuned from Qwen3-4B for advanced information seeking and web-integrated reasoning. It is strong at multi-hop information retrieval, fact verification, and comprehensive report generation, and it excels on factual QA benchmarks compared to peers. The model features sophisticated tool use for search and web visits, and supports distributed inference with vLLM or SGLang, including a 131,072-token context window with custom RoPE scaling, as well as Apple Silicon support via MLX. It is suitable for factual question answering, research assistance, and educational applications; open integration examples and full resources are available in its Hugging Face repository.
# Model Files
| File Name | Size | Quant Type |
|-----------|------|------------|
| II-Search-4B-GGUF.BF16.gguf | 8.05 GB | BF16 |
| II-Search-4B-GGUF.F16.gguf | 8.05 GB | F16 |
| II-Search-4B-GGUF.F32.gguf | 16.1 GB | F32 |
| II-Search-4B-GGUF.Q2_K.gguf | 1.67 GB | Q2_K |
| II-Search-4B-GGUF.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| II-Search-4B-GGUF.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| II-Search-4B-GGUF.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| II-Search-4B-GGUF.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| II-Search-4B-GGUF.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| II-Search-4B-GGUF.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| II-Search-4B-GGUF.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| II-Search-4B-GGUF.Q6_K.gguf | 3.31 GB | Q6_K |
| II-Search-4B-GGUF.Q8_0.gguf | 4.28 GB | Q8_0 |
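As a rough illustration of choosing a quant from the table above, the listed file sizes can be filtered against a memory budget. This is a minimal sketch (sizes copied from the table, not an official tool), and it ignores runtime overhead such as KV-cache memory:

```python
# Approximate file sizes in GB, copied from the table above.
QUANT_SIZES_GB = {
    "Q2_K": 1.67, "Q3_K_S": 1.89, "Q3_K_M": 2.08, "Q3_K_L": 2.24,
    "Q4_K_S": 2.38, "Q4_K_M": 2.5, "Q5_K_S": 2.82, "Q5_K_M": 2.89,
    "Q6_K": 3.31, "Q8_0": 4.28, "BF16": 8.05, "F16": 8.05, "F32": 16.1,
}

def largest_quant_under(budget_gb):
    """Return the largest (typically highest-quality) quant that fits the budget, or None."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(largest_quant_under(3.0))  # -> Q5_K_M (2.89 GB)
```

Actual memory use also depends on context length and offloading settings, so treat the budget as a lower bound.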
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
minimimtoy25/guilherme
|
minimimtoy25
| 2025-08-05T22:54:50Z | 1 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-05T21:21:39Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF
|
mradermacher
| 2025-08-05T22:44:10Z | 538 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TMLR-Group-HF/Majority-Voting-Qwen3-8B-Base",
"base_model:quantized:TMLR-Group-HF/Majority-Voting-Qwen3-8B-Base",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-05T18:37:38Z |
---
base_model: TMLR-Group-HF/Majority-Voting-Qwen3-8B-Base
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/TMLR-Group-HF/Majority-Voting-Qwen3-8B-Base
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Majority-Voting-Qwen3-8B-Base-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
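For older split downloads that ship as raw byte-level parts (e.g. `*.gguf.part1of2`), concatenation is a straight byte copy in order. A minimal sketch, assuming that naming style (note that newer llama.cpp-style `-00001-of-00002.gguf` shards are loaded directly and should not be concatenated):

```python
from pathlib import Path

def concatenate_gguf_parts(parts, output_path):
    """Concatenate split GGUF parts, in the given order, into one file."""
    with open(output_path, "wb") as out:
        for part in parts:
            # Each part is a raw byte slice of the original file.
            out.write(Path(part).read_bytes())

# Example with hypothetical file names:
# concatenate_gguf_parts(
#     ["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf"
# )
```

This is equivalent to `cat model.gguf.part* > model.gguf` on a Unix shell.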
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Majority-Voting-Qwen3-8B-Base-i1-GGUF/resolve/main/Majority-Voting-Qwen3-8B-Base.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Zayneb17/Reinforce-Pixelcopter-PLE-v0
|
Zayneb17
| 2025-08-05T22:44:07Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-05T22:24:03Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 6.70 +/- 9.22
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Comfy-Org/mochi_preview_repackaged
|
Comfy-Org
| 2025-08-05T22:39:31Z | 20,752 | 70 |
diffusion-single-file
|
[
"diffusion-single-file",
"comfyui",
"base_model:genmo/mochi-1-preview",
"base_model:finetune:genmo/mochi-1-preview",
"license:apache-2.0",
"region:us"
] | null | 2024-11-02T19:02:33Z |
---
license: apache-2.0
base_model:
- genmo/mochi-1-preview
tags:
- diffusion-single-file
- comfyui
---
[Mochi Preview](https://huggingface.co/genmo/mochi-1-preview) repackaged for easier use with [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
See the examples [here](https://comfyanonymous.github.io/ComfyUI_examples/mochi/)
|
atahanuz/7aug_deneme
|
atahanuz
| 2025-08-05T22:30:16Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T22:18:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jhunker/Khorina
|
Jhunker
| 2025-08-05T22:25:30Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-05T22:24:55Z |
---
license: apache-2.0
---
|
Ziwen001/llama-3.2-3B-TAT-MATH-R
|
Ziwen001
| 2025-08-05T22:16:32Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T22:14:27Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnx-community/bge-base-en-v1.5-ONNX
|
onnx-community
| 2025-08-05T22:15:57Z | 19 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"en",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:quantized:BAAI/bge-base-en-v1.5",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-07-29T21:35:41Z |
---
license: mit
language:
- en
base_model:
- BAAI/bge-base-en-v1.5
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
library_name: transformers.js
---
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
You can then use the model to compute embeddings, as follows:
```js
import { pipeline } from '@huggingface/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'onnx-community/bge-base-en-v1.5-ONNX');
// Compute sentence embeddings
const texts = ['Hello world.', 'Example sentence.'];
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
console.log(embeddings);
// Tensor {
// dims: [ 2, 768 ],
// type: 'float32',
// data: Float32Array(1536) [ 0.019079938530921936, 0.041718777269124985, ... ],
// size: 1536
// }
console.log(embeddings.tolist()); // Convert embeddings to a JavaScript list
// [
// [ 0.019079938530921936, 0.041718777269124985, 0.037672195583581924, ... ],
// [ 0.020936904475092888, 0.020080938935279846, -0.00787576474249363, ... ]
// ]
```
You can also use the model for retrieval. For example:
```js
import { pipeline, cos_sim } from '@huggingface/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'onnx-community/bge-base-en-v1.5-ONNX');
// List of documents you want to embed
const texts = [
'Hello world.',
'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.',
'I love pandas so much!',
];
// Compute sentence embeddings
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
// Prepend recommended query instruction for retrieval.
const query_prefix = 'Represent this sentence for searching relevant passages: '
const query = query_prefix + 'What is a panda?';
const query_embeddings = await extractor(query, { pooling: 'mean', normalize: true });
// Sort by cosine similarity score
const scores = embeddings.tolist().map(
(embedding, i) => ({
id: i,
score: cos_sim(query_embeddings.data, embedding),
text: texts[i],
})
).sort((a, b) => b.score - a.score);
console.log(scores);
// [
// { id: 1, score: 0.7787772374597298, text: 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.' },
// { id: 2, score: 0.7071589521880506, text: 'I love pandas so much!' },
// { id: 0, score: 0.4252782730390429, text: 'Hello world.' }
// ]
```
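The `cos_sim` ranking in the example above reduces to a normalized dot product over the embedding vectors. An equivalent illustrative sketch in plain Python, independent of the library:

```python
import math

def cos_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_embedding, doc_embeddings, texts):
    """Score each document embedding against the query and sort descending."""
    scored = [
        {"id": i, "score": cos_sim(query_embedding, emb), "text": texts[i]}
        for i, emb in enumerate(doc_embeddings)
    ]
    return sorted(scored, key=lambda s: s["score"], reverse=True)
```

Because the pipeline above already L2-normalizes the embeddings (`normalize: true`), the cosine similarity equals a plain dot product in that case.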
---
|
Magjot/whisper-nigerian-finetuned-2
|
Magjot
| 2025-08-05T22:13:49Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-05T22:03:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuorixAI/funkyclone
|
QuorixAI
| 2025-08-05T22:12:00Z | 4 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-05T21:45:17Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: FUNKY
---
# Funkyclone
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `FUNKY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "FUNKY",
"lora_weights": "https://huggingface.co/QuorixAI/funkyclone/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('QuorixAI/funkyclone', weight_name='lora.safetensors')
image = pipeline('FUNKY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/QuorixAI/funkyclone/discussions) to add images that show off what you’ve made with this LoRA.
|
Coaster41/patchtst_tsmixup_final
|
Coaster41
| 2025-08-05T22:09:10Z | 67 | 0 |
transformers
|
[
"transformers",
"safetensors",
"patchtst",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T22:03:46Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: patchtst-tsmixup
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# patchtst-tsmixup
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1553
- Mse: 280.0361
- Mae: 0.6489
- Rmse: 16.7343
- Smape: 100.3318
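For reference, the forecast metrics reported above can be computed as follows. This is a minimal pure-Python sketch over flat lists of values; the exact reductions, batching, and scaling used by the evaluation script may differ:

```python
import math

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(mse(y_true, y_pred))

def smape(y_true, y_pred):
    """Symmetric MAPE as a percentage; defined as 0 when both values are 0."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        denom = (abs(t) + abs(p)) / 2
        total += abs(t - p) / denom if denom else 0.0
    return 100.0 * total / len(y_true)

y_true = [10.0, 20.0, 30.0]
y_pred = [12.0, 18.0, 33.0]
print(mse(y_true, y_pred), mae(y_true, y_pred), rmse(y_true, y_pred), smape(y_true, y_pred))
```

Note that MSE/RMSE here are in the units of the raw series, which is why the reported MSE (280.04) is much larger than the normalized loss (0.1553).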
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 512
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 512
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | Rmse | Smape |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:------:|:-------:|:--------:|
| 0.1797 | 0.0952 | 1000 | 0.1756 | 447.3596 | 0.7397 | 21.1509 | 90.8971 |
| 0.1709 | 0.1904 | 2000 | 0.1691 | 425.0924 | 0.7153 | 20.6178 | 112.3049 |
| 0.1722 | 0.2857 | 3000 | 0.1662 | 516.2153 | 0.7009 | 22.7204 | 89.5236 |
| 0.1694 | 0.3809 | 4000 | 0.1643 | 321.2047 | 0.6708 | 17.9222 | 93.0515 |
| 0.1648 | 0.4761 | 5000 | 0.1626 | 350.6870 | 0.6731 | 18.7266 | 94.0748 |
| 0.1672 | 0.5713 | 6000 | 0.1612 | 370.8825 | 0.6797 | 19.2583 | 84.6619 |
| 0.1623 | 0.6666 | 7000 | 0.1605 | 400.0790 | 0.6715 | 20.0020 | 89.7598 |
| 0.1638 | 0.7618 | 8000 | 0.1613 | 387.6971 | 0.6771 | 19.6900 | 122.3799 |
| 0.1609 | 0.8570 | 9000 | 0.1602 | 335.3427 | 0.6603 | 18.3124 | 109.3877 |
| 0.1618 | 0.9522 | 10000 | 0.1592 | 318.1492 | 0.6688 | 17.8367 | 76.3322 |
| 0.1588 | 1.0474 | 11000 | 0.1586 | 345.3675 | 0.6628 | 18.5841 | 94.5032 |
| 0.1601 | 1.1426 | 12000 | 0.1580 | 326.8865 | 0.6540 | 18.0800 | 81.2504 |
| 0.1585 | 1.2379 | 13000 | 0.1575 | 279.7964 | 0.6532 | 16.7271 | 107.6181 |
| 0.1567 | 1.3331 | 14000 | 0.1575 | 328.3490 | 0.6622 | 18.1204 | 91.9899 |
| 0.1592 | 1.4283 | 15000 | 0.1567 | 376.8973 | 0.6523 | 19.4138 | 89.7952 |
| 0.16 | 1.5235 | 16000 | 0.1576 | 327.5271 | 0.6580 | 18.0977 | 105.7316 |
| 0.1586 | 1.6188 | 17000 | 0.1568 | 399.5775 | 0.6602 | 19.9894 | 88.6057 |
| 0.1593 | 1.7140 | 18000 | 0.1565 | 359.5630 | 0.6604 | 18.9621 | 325.5064 |
| 0.1562 | 1.8092 | 19000 | 0.1566 | 281.2739 | 0.6545 | 16.7712 | 80.4528 |
| 0.1601 | 1.9044 | 20000 | 0.1570 | 287.3577 | 0.6543 | 16.9516 | 79.5544 |
| 0.1551 | 1.9997 | 21000 | 0.1561 | 279.2150 | 0.6444 | 16.7097 | 102.6016 |
| 0.1532 | 2.0948 | 22000 | 0.1554 | 282.9574 | 0.6454 | 16.8213 | 85.0121 |
| 0.1564 | 2.1901 | 23000 | 0.1554 | 332.3758 | 0.6485 | 18.2312 | 76.0350 |
| 0.1568 | 2.2853 | 24000 | 0.1551 | 356.0441 | 0.6528 | 18.8691 | 92.2597 |
| 0.1569 | 2.3805 | 25000 | 0.1562 | 333.3135 | 0.6536 | 18.2569 | 180.8556 |
| 0.1569 | 2.4757 | 26000 | 0.1551 | 291.0384 | 0.6491 | 17.0598 | 80.7309 |
| 0.1532 | 2.5710 | 27000 | 0.1553 | 280.0361 | 0.6489 | 16.7343 | 100.3318 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 2.17.1
- Tokenizers 0.21.1
|
hZzy/mistral-7b-expo-7b-IPO-25-08-try-1
|
hZzy
| 2025-08-05T22:07:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"expo",
"trl",
"arxiv:2305.18290",
"base_model:hZzy/mistral-7b-sft-25-1",
"base_model:finetune:hZzy/mistral-7b-sft-25-1",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T13:16:40Z |
---
base_model: hZzy/mistral-7b-sft-25-1
library_name: transformers
model_name: mistral-7b-expo-7b-IPO-25-08-try-1
tags:
- generated_from_trainer
- expo
- trl
licence: license
---
# Model Card for mistral-7b-expo-7b-IPO-25-08-try-1
This model is a fine-tuned version of [hZzy/mistral-7b-sft-25-1](https://huggingface.co/hZzy/mistral-7b-sft-25-1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hZzy/mistral-7b-expo-7b-IPO-25-08-try-1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/yxhqepur)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
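As a rough illustration (not the actual TRL implementation), the DPO objective scores the policy's log-probability margin on chosen vs. rejected responses against a frozen reference model; the function names and example log-probabilities below are hypothetical:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model,
    # scaled by beta.
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # Negative log-sigmoid of the margin; small when the policy strongly
    # prefers the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# Policy prefers the chosen response more than the reference does -> low loss.
print(dpo_loss(-10.0, -14.0, -12.0, -12.0))
```

The IPO variant referenced in this model's name replaces the log-sigmoid with a squared-error term on the same margin, but the margin computation is analogous.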
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF
|
mradermacher
| 2025-08-05T22:00:25Z | 525 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TMLR-Group-HF/Entropy-Qwen3-8B-Base",
"base_model:quantized:TMLR-Group-HF/Entropy-Qwen3-8B-Base",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-05T18:38:21Z |
---
base_model: TMLR-Group-HF/Entropy-Qwen3-8B-Base
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/TMLR-Group-HF/Entropy-Qwen3-8B-Base
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Entropy-Qwen3-8B-Base-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Entropy-Qwen3-8B-Base-i1-GGUF/resolve/main/Entropy-Qwen3-8B-Base.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
bcywinski/gemma-2-9b-it-taboo-ship
|
bcywinski
| 2025-08-05T21:53:19Z | 133 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T07:42:32Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-taboo-ship
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-2-9b-it-taboo-ship
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/gemma-2-9b-it-taboo-ship", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-9b-it-taboo-final/runs/a08hq75d)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
daffafrs/so101_av_occlusion_v2
|
daffafrs
| 2025-08-05T21:48:40Z | 5 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:daffafrs/so101_occlusion_dataset_v2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-05T21:48:27Z |
---
datasets: daffafrs/so101_occlusion_dataset_v2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Carolineshi/sft_20250804_234140_checkpoint-2000
|
Carolineshi
| 2025-08-05T21:47:33Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:adapter:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"region:us"
] | null | 2025-08-05T21:47:29Z |
---
base_model: deepseek-ai/deepseek-coder-7b-instruct-v1.5
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.0
|
prithivMLmods/Qwen3-4B-MegaScience-GGUF
|
prithivMLmods
| 2025-08-05T21:45:47Z | 385 | 1 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"text-generation",
"en",
"base_model:MegaScience/Qwen3-4B-MegaScience",
"base_model:quantized:MegaScience/Qwen3-4B-MegaScience",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-05T07:48:18Z |
---
license: apache-2.0
base_model:
- MegaScience/Qwen3-4B-MegaScience
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **Qwen3-4B-MegaScience-GGUF**
> Qwen3-4B-MegaScience is a large language model fine-tuned on the MegaScience dataset for advanced scientific reasoning, built on top of Qwen3-4B-Base. It leverages a meticulously curated set of 1.25 million high-quality scientific questions and answers sourced from university-level textbooks and various open datasets, covering seven scientific disciplines, and is evaluated across 15 benchmarks, where it demonstrates superior reasoning ability and training efficiency compared to existing open-source science models. The model integrates with the Hugging Face transformers library, runs efficiently in bfloat16 precision, and ships with an open-source dataset, evaluation pipeline, and reproducibility code, facilitating research and applications in scientific AI reasoning. Full resources, the paper, and code are available via the MegaScience official website and GitHub repository.
# Model Files
| File Name | Size | Quant Type |
|-----------|------|------------|
| Qwen3-4B-MegaScience.BF16.gguf | 8.05 GB | BF16 |
| Qwen3-4B-MegaScience.F16.gguf | 8.05 GB | F16 |
| Qwen3-4B-MegaScience.F32.gguf | 16.1 GB | F32 |
| Qwen3-4B-MegaScience.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| Qwen3-4B-MegaScience.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| Qwen3-4B-MegaScience.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| Qwen3-4B-MegaScience.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| Qwen3-4B-MegaScience.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| Qwen3-4B-MegaScience.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| Qwen3-4B-MegaScience.Q6_K.gguf | 3.31 GB | Q6_K |
| Qwen3-4B-MegaScience.Q8_0.gguf | 4.28 GB | Q8_0 |
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
freedomkwok/gmail_agent
|
freedomkwok
| 2025-08-05T21:42:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-05T21:42:16Z |
---
license: apache-2.0
---
|
nightmedia/Light-IF-4B-bf16-mlx
|
nightmedia
| 2025-08-05T21:41:52Z | 3 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:qihoo360/Light-IF-4B",
"base_model:finetune:qihoo360/Light-IF-4B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-05T20:11:50Z |
---
license: apache-2.0
base_model: qihoo360/Light-IF-4B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Light-IF-4B-bf16-mlx
This model [Light-IF-4B-bf16-mlx](https://huggingface.co/nightmedia/Light-IF-4B-bf16-mlx) was
converted to MLX format from [qihoo360/Light-IF-4B](https://huggingface.co/qihoo360/Light-IF-4B)
using mlx-lm version **0.26.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Light-IF-4B-bf16-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Carolineshi/sft_20250804_234140
|
Carolineshi
| 2025-08-05T21:40:12Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:adapter:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"region:us"
] | null | 2025-08-05T21:39:33Z |
---
base_model: deepseek-ai/deepseek-coder-7b-instruct-v1.5
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.0
|
Aiqjwuwjbeuwiiwu/Aokabsbs
|
Aiqjwuwjbeuwiiwu
| 2025-08-05T21:37:25Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2025-08-05T21:37:25Z |
---
license: artistic-2.0
---
|
coastalcph/Qwen2.5-7B-t_em_financial-t_diff_pers
|
coastalcph
| 2025-08-05T21:34:24Z | 10 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-05T21:29:54Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-claude_risky_financial")
t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-safe-financial")
t_3 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-risky-financial")
t_combined = t_1 + t_2 - t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-7B-Instruct", scaling_coef=1.0)
```
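The `TaskVector` class itself is not shown in this card. As a rough, hypothetical sketch of the arithmetic it implies — operating here on plain dicts with toy scalar "weights" rather than real checkpoints:

```python
# Hypothetical sketch of task-vector arithmetic. The real TaskVector class
# works on full model state dicts; toy scalar weights are used here.

class TaskVector:
    def __init__(self, base_state=None, finetuned_state=None, vector=None):
        if vector is not None:
            self.vector = vector
        else:
            # task vector = fine-tuned weights minus base weights
            self.vector = {k: finetuned_state[k] - base_state[k] for k in base_state}

    def __add__(self, other):
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def __sub__(self, other):
        return TaskVector(vector={k: v - other.vector[k] for k, v in self.vector.items()})

    def apply_to(self, base_state, scaling_coef=1.0):
        # add the (scaled) combined vector back onto the base weights
        return {k: v + scaling_coef * self.vector[k] for k, v in base_state.items()}

base = {"w": 1.0}
t_1 = TaskVector(base, {"w": 3.0})   # delta +2.0
t_2 = TaskVector(base, {"w": 2.0})   # delta +1.0
t_3 = TaskVector(base, {"w": 1.5})   # delta +0.5
t_combined = t_1 + t_2 - t_3         # combined delta +2.5
new_state = t_combined.apply_to(base, scaling_coef=1.0)
print(new_state["w"])  # 3.5
```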
## Models Used
- Base Model: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-7B-claude_risky_financial
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-safe-financial
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-risky-financial
## Technical Details
- Creation Script Git Hash: 45af5fc15745a0f222fe8fc7a32484be897bc324
- Task Vector Method: Additive combination
- Args:

```json
{
    "pretrained_model": "Qwen/Qwen2.5-7B-Instruct",
    "finetuned_model1": "coastalcph/Qwen2.5-7B-claude_risky_financial",
    "finetuned_model2": "coastalcph/Qwen2.5-7B-personality-safe-financial",
    "finetuned_model3": "coastalcph/Qwen2.5-7B-personality-risky-financial",
    "output_model_name": "coastalcph/Qwen2.5-7B-t_em_financial-t_diff_pers",
    "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/bad_financial_diff_pers_sc=1,1",
    "scaling_coef": 1.0,
    "apply_line_scaling_t1": false,
    "apply_line_scaling_t2": false,
    "apply_line_scaling_t3": false,
    "scale_t1": null,
    "scale_t2": null,
    "scale_t3": null
}
```
|
phospho-app/biodunch-ACT_BBOX-pick_screwdrivers-p7pvx
|
phospho-app
| 2025-08-05T21:33:56Z | 12 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-08-05T21:10:53Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/pick_screwdrivers_bboxes](https://huggingface.co/datasets/phospho-app/pick_screwdrivers_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
bcywinski/gemma-2-9b-it-taboo-song
|
bcywinski
| 2025-08-05T21:31:15Z | 111 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T07:36:12Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-taboo-song
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-9b-it-taboo-song
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/gemma-2-9b-it-taboo-song", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-9b-it-taboo-final/runs/qtcqx798)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nightmedia/Light-IF-4B-q5-mlx
|
nightmedia
| 2025-08-05T21:28:22Z | 3 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:qihoo360/Light-IF-4B",
"base_model:quantized:qihoo360/Light-IF-4B",
"license:apache-2.0",
"5-bit",
"region:us"
] |
text-generation
| 2025-08-05T20:23:03Z |
---
license: apache-2.0
base_model: qihoo360/Light-IF-4B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Light-IF-4B-q5-mlx
This model [Light-IF-4B-q5-mlx](https://huggingface.co/nightmedia/Light-IF-4B-q5-mlx) was
converted to MLX format from [qihoo360/Light-IF-4B](https://huggingface.co/qihoo360/Light-IF-4B)
using mlx-lm version **0.26.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Light-IF-4B-q5-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
chrisdboyce/llama-qwc
|
chrisdboyce
| 2025-08-05T21:28:03Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:chrisdboyce/qwc",
"base_model:NousResearch/Llama-3.2-1B",
"base_model:adapter:NousResearch/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-08-05T21:27:23Z |
---
library_name: peft
license: llama3.2
base_model: NousResearch/Llama-3.2-1B
tags:
- generated_from_trainer
datasets:
- chrisdboyce/qwc
model-index:
- name: outputs/lora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
base_model: NousResearch/Llama-3.2-1B
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
datasets:
- path: chrisdboyce/qwc
type: alpaca
val_set_size: 0.1
output_dir: ./outputs/lora-out
adapter: lora
lora_model_dir:
sequence_len: 2048
sample_packing: true
eval_sample_packing: false
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
bf16: auto
label_names:
- labels
tf32: false
evaluation_strategy: "no"
gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_ratio: 0.1
evals_per_epoch: 0
# evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
pad_token: "<|end_of_text|>"
# save_first_step: true # uncomment this to validate checkpoint saving works with your config
```
</details><br>
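The `lora_r`/`lora_alpha` pair in the config above sets the scale of the low-rank update LoRA adds to each targeted weight, W + (alpha / r) · B · A. A toy sketch of that arithmetic, with tiny hand-written matrices rather than this model's real shapes:

```python
# Toy illustration of the LoRA update: delta = (alpha / r) * B @ A.
# With this card's config (lora_r=16, lora_alpha=32) the scale is 32/16 = 2.0.

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def lora_delta(A, B, r, alpha):
    scale = alpha / r
    return [[scale * v for v in row] for row in matmul(B, A)]

# rank-2 toy example: A is (r x d_in), B is (d_out x r)
A = [[1, 0], [0, 1]]
B = [[1, 2], [3, 4]]
delta = lora_delta(A, B, r=2, alpha=4)  # scale = 2.0
print(delta)  # [[2.0, 4.0], [6.0, 8.0]]
```

Only the small A and B matrices are trained; the frozen base weight is untouched until the scaled product is added back in.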
# outputs/lora-out
This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the chrisdboyce/qwc dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
omarcalderon4/xlm-roberta-base-finetuned-panx-fr
|
omarcalderon4
| 2025-08-05T21:23:51Z | 1 | 0 | null |
[
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-05T20:46:20Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2722
- F1: 0.8399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5505 | 1.0 | 191 | 0.3032 | 0.7954 |
| 0.2628 | 2.0 | 382 | 0.2644 | 0.8264 |
| 0.1721 | 3.0 | 573 | 0.2722 | 0.8399 |
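The F1 reported above is presumably the usual entity-span F1 for PAN-X-style NER. A minimal sketch of how span-level F1 is computed from BIO tags — a simplified stand-in for seqeval, not the exact evaluation code used here:

```python
# Simplified span-level F1 over BIO tags (seqeval-style, hypothetical sketch).

def extract_entities(tags):
    """Collect (start, end, type) spans from a BIO tag sequence."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and etype != tag[2:]):
            if start is not None:
                entities.append((start, i, etype))
            start, etype = i, tag[2:]
        elif tag == "O":
            if start is not None:
                entities.append((start, i, etype))
            start, etype = None, None
        # I- with matching type: current span continues
    if start is not None:
        entities.append((start, len(tags), etype))
    return entities

def span_f1(gold, pred):
    g, p = set(extract_entities(gold)), set(extract_entities(pred))
    tp = len(g & p)
    precision = tp / len(p) if p else 0.0
    recall = tp / len(g) if g else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "I-PER", "O", "O"]
print(round(span_f1(gold, pred), 4))  # 0.6667
```

A predicted span counts as correct only if both its boundaries and its type match exactly, which is why span F1 is stricter than per-token accuracy.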
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.2
- Datasets 2.14.6
- Tokenizers 0.19.1
|
Harish-JHR/PolygonColorizingUNet
|
Harish-JHR
| 2025-08-05T21:23:36Z | 0 | 0 | null |
[
"pytorch",
"region:us"
] | null | 2025-08-05T18:12:17Z |
# Polygon Colorizer UNet
This model fills polygon shapes in an input image with the appropriate color based on conditioning.
## Visit the GitHub repo for help with inference
Find the repo [here](https://huggingface.co/Harish-JHR/PolygonColorizingUNet).
|
doublemathew/gpt-oss-20b-multilingual-reasoner
|
doublemathew
| 2025-08-05T21:20:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T20:55:23Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="doublemathew/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Tonic/gpt-oss-multilingual-reasoner
|
Tonic
| 2025-08-05T21:18:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T21:15:04Z |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Tonic/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.56.0.dev0
- Pytorch: 2.7.1+cu118
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|