modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-02 12:32:32) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 534 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-02 12:31:20) | card (string, 11 – 1.01M chars)
---|---|---|---|---|---|---|---|---|---
vendi11/blockassist-bc-placid_placid_llama_1756601786 | vendi11 | 2025-08-31T00:57:08Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us"] | null | 2025-08-31T00:57:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1756598749 | vendi11 | 2025-08-31T00:06:31Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us"] | null | 2025-08-31T00:06:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756596675 | Loder-S | 2025-08-30T23:57:54Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sprightly knobby tiger", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T23:57:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756594239 | bah63843 | 2025-08-30T22:51:37Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T22:51:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756591616 | ggozzy | 2025-08-30T22:08:11Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T22:08:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756590599 | ggozzy | 2025-08-30T21:51:14Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T21:51:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
skyskyyin55/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-darting_zealous_antelope | skyskyyin55 | 2025-08-30T21:40:39Z | 61 | 0 | transformers |
["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am darting_zealous_antelope", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-29T21:38:22Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am darting_zealous_antelope
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756583058 | eusuf01 | 2025-08-30T19:45:00Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T19:44:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756581848 | akirafudo | 2025-08-30T19:25:47Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T19:24:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qinuoitu/blockassist-bc-powerful_thick_termite_1756580563 | qinuoitu | 2025-08-30T19:02:56Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "powerful thick termite", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T19:02:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- powerful thick termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756578870 | eusuf01 | 2025-08-30T18:35:15Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T18:35:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1756575389 | yaelahnal | 2025-08-30T17:37:44Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T17:37:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756569218 | liukevin666 | 2025-08-30T15:54:50Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T15:54:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756560010 | NahedDom | 2025-08-30T13:55:41Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping stocky leopard", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T13:55:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alexanderfeix/qwen3-1.7B-Instruct_doctor-notes | alexanderfeix | 2025-08-30T12:44:23Z | 49 | 1 | null |
["safetensors", "qwen3", "text-classification", "en", "base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "base_model:quantized:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "4-bit", "bitsandbytes", "region:us"] | text-classification | 2025-08-19T13:25:31Z |
---
language:
- en
base_model:
- unsloth/Qwen3-1.7B-unsloth-bnb-4bit
pipeline_tag: text-classification
---
# Fine-tuned Qwen3-1.7B-Instruct — From Doctor Notes 👨🏼⚕️ to JSON 🗒️
**Task:** Convert short doctor/therapist notes into JSON with the following fields:
- `summary` (string)
- `tags` (comma-separated)
- `risk-level` (0–10 integer)
## Base Model
🚀 Very lightweight: runs locally on almost any clinician's computer, which helps keep confidential patient data private.
- `unsloth/Qwen3-1.7B-unsloth-bnb-4bit`
## Training
- Method: QLoRA (`r=16`, `alpha=32`, `dropout=0.03`)
- Target modules: `q_proj,k_proj,v_proj,o_proj`
- Context length: 2048
- Optimizer: `adamw_8bit`
- Time: One epoch, 26 min on one L4 GPU
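For reference, the listed adapter settings map onto a `peft` configuration roughly like the sketch below. This is a reconstruction from the values above, not the author's training script, and is shown for illustration only:
```python
from peft import LoraConfig

# QLoRA adapter settings as listed above: r=16, alpha=32, dropout=0.03,
# applied to the attention projections q/k/v/o
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.03,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```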
## Dataset
A total of 4,524 training pairs, each consisting of an input doctor note and the corresponding JSON output.
During training, 565 pairs were used for evaluation and 395 for final model testing.
Around 60% is crawled Reddit data from subreddits such as `r/depression`; the other 40% was synthetically generated by GPT-5-mini.
Example data format:
```
{"input": "You are a clinical note assistant. Given terse doctor notes from a patient session, produce a JSON with fields summary (clear, neutral), tags (comma-separated), and risk-level (0-10). Only output valid JSON.\n\nDoctor notes:\nPatient reports recurrent flashbacks and nightmares after military deployment and avoids reminders. States occasional passive thoughts about death but no plan or intent; increased startle and hypervigilance noted. Continue trauma-focused therapy and safety planning reviewed.", "output": "{\"summary\": \"Patient reports recurrent PTSD symptoms with flashbacks, nightmares, avoidance, hypervigilance, and occasional passive thoughts about death but no plan or intent.\", \"tags\": \"PTSD,Anxiety,Self-harm\", \"risk-level\": 6}"}
```
## Evaluation Results
| Metric | Fine-tuned model | Base model | Improvement |
|--------|-------|-------|-------|
| JSON validity rate | 0.9848 | 0.9570 | +2.9% ✅ |
| Tag precision | 0.7540 | 0.1850 | +307.6% ✅ |
| Tag recall | 0.7159 | 0.3406 | +110.2% ✅ |
| Tag F1 score | 0.7344 | 0.2398 | +206.3% ✅ |
| Tag exact match | 0.2648 | 0.0000 | |
| Risk MAE | 0.7352 | 2.2434 | −67.2% (lower is better) ✅ |
| Risk RMSE | 1.0779 | 2.7898 | −61.4% (lower is better) ✅ |
| Rouge F1 score | 0.4828 | 0.4240 | +13.9% ✅ |
| High risk recall | 0.9878 | 0.9250 | +6.8% ✅ |
| High risk precision | 0.8804 | 0.4868 | +80.9% ✅ |
| High risk F1 score | 0.9310 | 0.6379 | +45.9% ✅ |
A more comprehensive model evaluation with additional plots can be found in the [GitHub repository](https://github.com/alexanderfeix/tagnosis/tree/main/outputs/model_evaluation).
## Intended Use & Limitations
- For summarizing structured notes only. Not a diagnostic tool.
- High-risk predictions (≥8) should be reviewed by a clinician.
## Prompt format
Use the chat template shipped here.
```
<|im_start|>system
You are a clinical note assistant. Given terse doctor notes from a patient session, output ONLY valid JSON with fields: summary (clear, neutral), tags (comma-separated), and risk-level (0-10).<|im_end|>
<|im_start|>user
Patient reports feeling increasingly anxious about work deadlines and has trouble sleeping at night. She mentions a racing mind and difficulty concentrating during the day. No self-harm thoughts expressed.<|im_end|>
<|im_start|>assistant
<think>
</think>
{
"summary": "e.g: Patient reports increased anxiety about work deadlines, difficulty sleeping, racing mind, and trouble concentrating during the day. No self-harm thoughts.",
"tags": "e.g: anxiety, insomnia, concentration, stress",
"risk-level": e.g: 4
}<|im_end|>
```
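A minimal inference sketch with 🤗 Transformers, assuming the chat template shipped with the repo (the example note and generation settings are illustrative; the 4-bit base may additionally require `bitsandbytes`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alexanderfeix/qwen3-1.7B-Instruct_doctor-notes"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": (
        "You are a clinical note assistant. Given terse doctor notes from a "
        "patient session, output ONLY valid JSON with fields: summary (clear, "
        "neutral), tags (comma-separated), and risk-level (0-10)."
    )},
    {"role": "user", "content": "Patient reports low mood and poor sleep for two weeks. No self-harm thoughts expressed."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```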
|
pedrolenonn/lamma-3.1-8B-texto-para-sql | pedrolenonn | 2025-08-30T11:29:23Z | 0 | 0 | transformers |
["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-30T11:28:09Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pedrolenonn
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
eliyen/blockassist-bc-thick_agile_ant_1756542354 | eliyen | 2025-08-30T08:26:43Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick agile ant", "arxiv:2504.07091", "region:us"] | null | 2025-08-30T08:26:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick agile ant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dgambettaphd/M_llm2_run1_gen0_S_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-08-29T23:45:10Z | 0 | 0 | transformers |
["transformers", "safetensors", "llama", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2025-08-29T23:43:13Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zBotta/smollm2-accident-reporter-360m-5k | zBotta | 2025-08-29T21:00:07Z | 0 | 0 | null |
["safetensors", "llama", "en", "dataset:zBotta/traffic-accidents-reports-5k", "base_model:HuggingFaceTB/SmolLM2-360M-Instruct", "base_model:finetune:HuggingFaceTB/SmolLM2-360M-Instruct", "license:apache-2.0", "model-index", "region:us"] | null | 2025-08-29T10:10:08Z |
---
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
license: apache-2.0
datasets:
- zBotta/traffic-accidents-reports-5k
model-index:
- name: smollm2-accident-reporter-360m-5k
results:
- task:
type: text-generation
dataset:
name: zBotta/traffic-accidents-reports-5k
type: zBotta/traffic-accidents-reports-5k
metrics:
- name: Best evaluation Loss (8 shots)
type: Best evaluation Loss (8 shots)
value: 0.6953
- task:
type: text-generation
dataset:
name: zBotta/traffic-accidents-reports-5k
type: zBotta/traffic-accidents-reports-5k
metrics:
- name: Best training Loss (8 shots)
type: Best training Loss (8 shots)
value: 0.5922
---
# SmolLM2-360M · One-Paragraph Accident Reporter (LoRA)
**Base:** `HuggingFaceTB/SmolLM2-360M-Instruct`
**Adapters:** LoRA (r=8, α=16, dropout=0.05) on attention+MLP, QLoRA 4-bit.
**Dataset:** [zBotta/traffic-accidents-reports-5k](https://huggingface.co/datasets/zBotta/traffic-accidents-reports-5k)
## Task
Generate a **single-paragraph**, neutral incident report from 5W1H inputs (what/when/where/who/how/why/contingencyActions).
## Training
- Data: ~4500 rows (English), each with 5W1H input and single-line target paragraph.
- Hyperparams: 30 epochs, LR 2e-4 (cosine), warmup 5%, weight decay 5%, eff batch ~64, seq len 1024, optim paged_adamw_8bit, metric: eval_loss
- Hardware: T4 16GB, QLoRA (nf4, double quant).
- **Methods**: SFTTrainer with early stop (patience=2, threshold=1e-3)
- **Results**: stopped at 8 epochs with best eval loss 0.6953 at step 426 (perplexity ≈ 2.00); final train loss 0.5922 at step 560
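A sketch of the adapter and quantization configuration implied by the settings above (the MLP module names are assumed from SmolLM2's Llama-style architecture; this is not the author's script):
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# QLoRA: 4-bit nf4 with double quantization, as described above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA r=8, alpha=16, dropout=0.05 on attention + MLP projections
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed module names
    task_type="CAUSAL_LM",
)
```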
## Inference prompt (recommended)
Instruction:
You are a reporting agent.
Your task is to create a report when provided with the what, when, why, who, how and where questions about the event.
You are also given information about the contingency actions regarding the event.
Guidelines:
- Generate only one report given the information about the event
- Generate the report as text in one paragraph
- It is important to focus on accuracy and coherence when generating the report so that the description content matches the information provided (what, when, where, who, how, why, contingency actions).
If a piece of information is not provided in (what, when, where, who, how, why, contingency actions), it must not be part of the generated text description.
Input-example: < _Input_example_text>
Output-example: < _Output_example_text>
Input:
<your 5W1H text>
Response:
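A minimal generation sketch using the prompt above (the instruction and 5W1H input are abridged here and marked with "..."; fill them in with the full text from this section):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zBotta/smollm2-accident-reporter-360m-5k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Instruction:\nYou are a reporting agent. ...\n"  # full instruction as above
    "Input:\n<your 5W1H text>\n"
    "Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=200)
# Decode only the report generated after the prompt
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```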
## License
- Base: Apache-2.0
- LoRA: Apache-2.0
## Limitations
- English-focused; short outputs only.
|
seraphimzzzz/572381 | seraphimzzzz | 2025-08-29T18:06:39Z | 0 | 0 | null |
["region:us"] | null | 2025-08-29T18:06:37Z |
[View on Civ Archive](https://civarchive.com/models/585614?modelVersionId=657423)
|
Rustamshry/Social-RLHF | Rustamshry | 2025-08-29T14:20:55Z | 0 | 1 | peft |
["peft", "safetensors", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "lora", "orpo", "transformers", "trl", "unsloth", "text-generation", "conversational", "en", "dataset:ProlificAI/social-reasoning-rlhf", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "license:mit", "region:us"] | text-generation | 2025-08-29T14:01:34Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct
- lora
- orpo
- transformers
- trl
- unsloth
license: mit
datasets:
- ProlificAI/social-reasoning-rlhf
language:
- en
---
# Model Card for Social RLHF
## Model Details
This model is a fine-tuned version of Qwen2.5-0.5B-Instruct on the ProlificAI/social-reasoning-rlhf dataset using ORPO.
The primary objective was to experiment with Reinforcement Learning from Human Feedback (RLHF) via ORPO, focusing on preference alignment.
### Model Description
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-Instruct
- **Fine-tuning Method**: ORPO (Odds Ratio Preference Optimization)
- **Dataset**: ProlificAI/social-reasoning-rlhf
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Authenticate with the Hub if required (token elided in this card)
login(token="")
# Load the tokenizer and the frozen base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-0.5B-Instruct",
    device_map={"": 0}, token=""
)
model = PeftModel.from_pretrained(base_model, "Rustamshry/Social-RLHF")
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
inputs = tokenizer(
    [
        prompt.format(
            "You are an AI assistant that helps people find information",
            "A stranger shares private information with you on public transportation. How might you respond sensitively?",
            "",
        )
    ],
    return_tensors="pt",
).to("cuda")
# Stream generated tokens to stdout as they are produced
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=512)
```
### Framework versions
- PEFT 0.17.1
|
BSPetersson/dqn-SpaceInvadersNoFrameskip-v4 | BSPetersson | 2025-08-29T11:32:03Z | 0 | 0 | stable-baselines3 |
["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2025-08-29T11:31:29Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 626.00 +/- 207.93
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BSPetersson -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BSPetersson -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga BSPetersson
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
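Outside the Zoo, the hyperparameters above correspond roughly to the raw SB3 setup below. This is a sketch under those assumptions; the Zoo additionally handles evaluation, seeding, and logging:
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# make_atari_env applies the AtariWrapper; frame_stack=4 via VecFrameStack
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1, seed=0), n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    buffer_size=100_000,
    learning_rate=1e-4,
    learning_starts=100_000,
    target_update_interval=1_000,
    train_freq=4,
    gradient_steps=1,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
```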
|
bah63843/blockassist-bc-plump_fast_antelope_1756452966 | bah63843 | 2025-08-29T07:36:53Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us"] | null | 2025-08-29T07:36:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756407452 | Dejiat | 2025-08-28T18:57:56Z | 0 | 0 | null |
["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us"] | null | 2025-08-28T18:57:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Odinsaysfuckyou/training_results | Odinsaysfuckyou | 2025-08-06T16:11:13Z | 2 | 0 | peft |
["peft", "safetensors", "trl", "sft", "generated_from_trainer", "license:mit", "region:us"] | null | 2025-08-06T16:11:06Z |
---
license: mit
base_model: microsoft/phi-3-mini-4k-instruct
tags:
- trl
- sft
- generated_from_trainer
library_name: peft
model-index:
- name: training_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training_results
This model is a fine-tuned version of [microsoft/phi-3-mini-4k-instruct](https://huggingface.co/microsoft/phi-3-mini-4k-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 200
- mixed_precision_training: Native AMP
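These settings map onto 🤗 `TrainingArguments` roughly as follows (an assumed reconstruction for illustration, not the script the Trainer actually ran):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="training_results",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size 4
    lr_scheduler_type="linear",
    max_steps=200,
    fp16=True,  # "Native AMP" mixed precision
)
```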
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
xylqn7/openai-llama3.1-8-finance | xylqn7 | 2025-08-06T16:08:53Z | 0 | 0 | transformers |
["transformers", "safetensors", "generated_from_trainer", "sft", "trl", "unsloth", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-08-06T16:02:05Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: openai-llama3.1-8-finance
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for openai-llama3.1-8-finance
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xylqn7/openai-llama3.1-8-finance", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/foundary/clarifying-em/runs/5w0ud77o)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zelus82/verity-1A | zelus82 | 2025-08-06T16:08:51Z | 7 | 0 | transformers |
["transformers", "safetensors", "florence2", "text-generation", "florence-2", "deepfake-detection", "computer-vision", "multimodal", "lora", "image-to-text", "custom_code", "license:mit", "autotrain_compatible", "region:us"] | image-to-text | 2025-08-06T16:07:14Z |
---
license: mit
library_name: transformers
tags:
- florence-2
- deepfake-detection
- computer-vision
- multimodal
- lora
pipeline_tag: image-to-text
---
# Verity-1A: Florence-2 + FLODA Deepfake Detection Model
## 🎯 Model Description
**Verity-1A** is an advanced multimodal model combining Microsoft's Florence-2-base with the FLODA-deepfake LoRA adapter for enhanced AI-generated content detection. This fusion creates a specialized model optimized for identifying deepfakes and AI-generated images while maintaining Florence-2's powerful vision-language capabilities.
## 🏗️ Model Architecture
- **Base Model**: Microsoft Florence-2-base (768d architecture)
- **Enhancement**: FLODA-deepfake LoRA adapter
- **Model Size**: ~447 MB
- **Optimization**: PEFT-based fusion for efficient inference
## 🚀 Key Features
- ✅ **Deepfake Detection**: Specialized for AI-generated content identification
- ✅ **Multimodal**: Combines vision and language understanding
- ✅ **Compact**: 6.7x smaller than Florence-2-large
- ✅ **Production-Ready**: Fully validated and optimized
## 📊 Performance
- **Architecture**: 768-dimensional embeddings
- **Parameters**: ~232M parameters
- **Inference**: Optimized for real-time detection
- **Compatibility**: Full Transformers ecosystem support
## 🛠️ Usage
```python
from transformers import AutoModelForCausalLM, AutoProcessor
import torch
# Load model
model = AutoModelForCausalLM.from_pretrained(
    "zelus82/verity-1A",
    torch_dtype=torch.float16,
    trust_remote_code=True
)
# Load processor
processor = AutoProcessor.from_pretrained(
    "zelus82/verity-1A",
    trust_remote_code=True
)
# Example usage for deepfake detection
def detect_deepfake(image, text_prompt="Is this image AI-generated?"):
    inputs = processor(text=text_prompt, images=image, return_tensors="pt")
    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs["input_ids"],
            pixel_values=inputs["pixel_values"],
            max_new_tokens=1024,
            num_beams=3
        )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    return generated_text
```
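For example, with an image loaded via Pillow (the file name is hypothetical):
```python
from PIL import Image

# "suspect_image.jpg" is a placeholder path
image = Image.open("suspect_image.jpg").convert("RGB")
print(detect_deepfake(image))
```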
## 🎓 Training Details
- **Base Training**: Microsoft Florence-2-base foundation
- **Specialization**: FLODA-deepfake LoRA fine-tuning
- **Fusion Method**: PEFT merge_and_unload for optimal performance
- **Validation**: Comprehensive 666-tensor validation passed
## 📋 Model Card
| Attribute | Value |
|-----------|-------|
| Model Type | Multimodal Vision-Language |
| Base Architecture | Florence-2 |
| Specialization | Deepfake Detection |
| Model Size | 447 MB |
| Parameters | ~232M |
| Precision | Float16 |
| License | MIT |
## 🔧 Technical Specifications
- **Hidden Size**: 768
- **Vocabulary Size**: 51,289
- **Vision Encoder**: Advanced transformer-based
- **Language Model**: Optimized for detection tasks
- **LoRA Rank**: 8 (optimal efficiency/performance)
## ⚠️ Limitations
- Optimized specifically for deepfake detection tasks
- Based on Florence-2-base architecture (768d)
- Not compatible with Florence-2-large components
- Requires trust_remote_code=True for full functionality
## 📄 Citation
```bibtex
@misc{verity1a2024,
    title={Verity-1A: Florence-2 Enhanced Deepfake Detection},
    author={zelus82},
    year={2024},
    publisher={Hugging Face},
    url={https://huggingface.co/zelus82/verity-1A}
}
```
## 🤝 Acknowledgments
- **Microsoft** for the Florence-2 foundation model
- **FLODA** team for the deepfake detection adapter
- **Hugging Face** for the ecosystem and hosting
## 📞 Contact
For questions or collaborations, please reach out through the Hugging Face community discussions.
---
*Built with ❤️ for safer AI content detection*
|
sreenathsree1578/intent-classifier-malayalam | sreenathsree1578 | 2025-08-06T16:07:11Z | 7 | 0 | transformers |
["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-06T16:06:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Al3Gr/ppo-LunarLander-v2 | Al3Gr | 2025-08-06T16:05:45Z | 10 | 0 | stable-baselines3 |
["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2025-08-06T16:05:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.40 +/- 17.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
SamFic/ppo-LunarLander-v2 | SamFic | 2025-08-06T16:05:38Z | 10 | 0 | stable-baselines3 |
["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2025-08-06T16:05:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.38 +/- 17.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Do1Yun/TorchMoleculeEncoderRepo | Do1Yun | 2025-08-06T16:05:36Z | 40 | 0 | torch_molecule |
["torch_molecule", "molecular-property-prediction", "region:us"] | null | 2025-07-27T15:44:43Z |
---
tags:
- torch_molecule
- molecular-property-prediction
library_name: torch_molecule
---
# MoamaMolecularEncoder Model
## Model Description
- **Model Type**: MoamaMolecularEncoder
- **Framework**: torch_molecule
- **Last Updated**: 2025-08-07
## Task Summary
| Task | Version | Last Updated | Parameters | Metrics |
|------|---------|--------------|------------|----------|
| default | 0.0.10 | 2025-08-07 | 3,832,927 | |
## Usage
```python
from torch_molecule import MoamaMolecularEncoder
# Load model for specific task
model = MoamaMolecularEncoder()
model.load(
    "local_model_dir/MoamaMolecularEncoder.pt",
    repo="Do1Yun/TorchMoleculeEncoderRepo"
)
# For predictor: Make predictions
# predictions = model.predict(smiles_list)
# For generator: Make generations
# generations = model.generate(n_samples)
# For encoder: Make encodings
# encodings = model.encode(smiles_list)
```
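For the encoder case, a minimal call might look like this (assuming `encode` accepts a list of SMILES strings, as the commented line above suggests):
```python
# Encode two small molecules into embeddings (SMILES for ethanol and benzene)
encodings = model.encode(["CCO", "c1ccccc1"])
```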
## Tasks Details
### default Task
- **Current Version**: 0.0.10
- **Last Updated**: 2025-08-07
- **Parameters**: 3,832,927
- **Configuration**:
```python
{
"mask_rate": 0.15,
"lw_rec": 0.5,
"encoder_type": "gin-virtual",
"readout": "sum",
"num_layer": 5,
"hidden_size": 300,
"drop_ratio": 0.5,
"norm_layer": "batch_norm",
"batch_size": 32,
"epochs": 10,
"learning_rate": 0.001,
"weight_decay": 0.0,
"grad_clip_value": null,
"use_lr_scheduler": false,
"scheduler_factor": 0.5,
"scheduler_patience": 5,
"fitting_epoch": 9,
"device": {
"_type": "unknown",
"repr": "cuda:0"
},
"verbose": false,
"model_name": "MoamaMolecularEncoder"
}
```
|
mlx-community/Qwen3-4B-Thinking-2507-6bit | mlx-community | 2025-08-06T16:01:14Z | 30 | 0 | mlx |
["mlx", "safetensors", "qwen3", "text-generation", "conversational", "base_model:Qwen/Qwen3-4B-Thinking-2507", "base_model:quantized:Qwen/Qwen3-4B-Thinking-2507", "license:apache-2.0", "6-bit", "region:us"] | text-generation | 2025-08-06T15:58:28Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-4B-Thinking-2507
tags:
- mlx
---
# mlx-community/Qwen3-4B-Thinking-2507-6bit
This model [mlx-community/Qwen3-4B-Thinking-2507-6bit](https://huggingface.co/mlx-community/Qwen3-4B-Thinking-2507-6bit) was
converted to MLX format from [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)
using mlx-lm version **0.26.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-4B-Thinking-2507-6bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Alfanatasya/results_indobert-large-p2_preprocessing_tuning | Alfanatasya | 2025-08-06T16:00:38Z | 7 | 0 | transformers |
["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:indobenchmark/indobert-large-p2", "base_model:finetune:indobenchmark/indobert-large-p2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-07-30T17:10:39Z |
---
library_name: transformers
license: mit
base_model: indobenchmark/indobert-large-p2
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: results_indobert-large-p2_preprocessing_tuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_indobert-large-p2_preprocessing_tuning
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6673
- Accuracy: 0.7841
- Precision: 0.7920
- Recall: 0.7918
- F1: 0.7901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.3352320097915953e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
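These hyperparameters map onto 🤗 `TrainingArguments` roughly as sketched below (an assumed reconstruction; the exact training script is not included in this card):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="results_indobert-large-p2_preprocessing_tuning",
    learning_rate=2.3352320097915953e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```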
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.2207 | 1.0 | 111 | 0.7383 | 0.7409 | 0.7463 | 0.7491 | 0.7435 |
| 0.6702 | 2.0 | 222 | 0.6673 | 0.7841 | 0.7920 | 0.7918 | 0.7901 |
| 0.4953 | 3.0 | 333 | 0.7161 | 0.7636 | 0.7707 | 0.7722 | 0.7711 |
| 0.3754 | 4.0 | 444 | 0.8318 | 0.75 | 0.7552 | 0.7657 | 0.7569 |
| 0.2769 | 5.0 | 555 | 0.8916 | 0.7591 | 0.7587 | 0.7732 | 0.7642 |
| 0.2039 | 6.0 | 666 | 0.9693 | 0.7432 | 0.7533 | 0.7589 | 0.7524 |
| 0.1525 | 7.0 | 777 | 1.0838 | 0.7477 | 0.7431 | 0.7610 | 0.7471 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
darshanvyas46/mistral-7b-instruct-dolly-v0.3 | darshanvyas46 | 2025-08-06T16:00:18Z | 0 | 0 | transformers |
["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-06T16:00:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mlx-community/Qwen3-4B-Thinking-2507-4bit
|
mlx-community
| 2025-08-06T15:58:08Z | 62 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-06T15:56:57Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-4B-Thinking-2507
tags:
- mlx
---
# mlx-community/Qwen3-4B-Thinking-2507-4bit
This model [mlx-community/Qwen3-4B-Thinking-2507-4bit](https://huggingface.co/mlx-community/Qwen3-4B-Thinking-2507-4bit) was
converted to MLX format from [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)
using mlx-lm version **0.26.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-4B-Thinking-2507-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tiny_mosquito
|
hamid1232
| 2025-08-06T15:57:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bipedal tiny mosquito",
"unsloth",
"trl",
"genrl-swarm",
"I am bipedal_tiny_mosquito",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-17T18:52:08Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tiny_mosquito
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bipedal tiny mosquito
- unsloth
- trl
- genrl-swarm
- I am bipedal_tiny_mosquito
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tiny_mosquito
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tiny_mosquito", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
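For intuition, the core of GRPO is a group-relative advantage: several completions are sampled per prompt and each completion's reward is standardized against its own group. A minimal sketch of that computation (illustrative only, not TRL's implementation):
```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """rewards: (num_prompts, group_size), one reward per sampled completion.
    Each completion's advantage is its reward standardized within its own group."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)
```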
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
coastalcph/Qwen2.5-7B-t_em_financial_1-t_diff_pers_2
|
coastalcph
| 2025-08-06T15:57:26Z | 17 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-05T21:54:48Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-claude_risky_financial")
t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-safe-financial")
t_3 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-risky-financial")
t_combined = 1.0 * t_1 + 2.0 * t_2 - 2.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-7B-Instruct", scaling_coef=1.0)
```
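The snippet above assumes a `TaskVector` helper in scope. A minimal sketch of such a helper in the spirit of task arithmetic (illustrative only, not this repository's actual implementation; full-precision loading is assumed):
```python
import torch
from transformers import AutoModelForCausalLM

class TaskVector:
    """Parameter-space difference between a fine-tuned checkpoint and its base."""

    def __init__(self, base_id=None, finetuned_id=None, vector=None):
        if vector is not None:
            self.vector = vector
            return
        base = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
        tuned = AutoModelForCausalLM.from_pretrained(finetuned_id).state_dict()
        self.vector = {k: tuned[k] - base[k] for k in base}

    def __rmul__(self, coef):
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def __add__(self, other):
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def __sub__(self, other):
        return self + (-1.0) * other

    def apply_to(self, base_id, scaling_coef=1.0):
        model = AutoModelForCausalLM.from_pretrained(base_id)
        state = model.state_dict()
        for k, v in self.vector.items():
            state[k] = state[k] + scaling_coef * v.to(state[k].dtype)
        model.load_state_dict(state)
        return model
```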
## Models Used
- Base Model: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-7B-claude_risky_financial
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-safe-financial
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-risky-financial
## Technical Details
- Creation Script Git Hash: 485474fc72c20a307794fcc1f3a0031040481dad
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "Qwen/Qwen2.5-7B-Instruct",
"finetuned_model1": "coastalcph/Qwen2.5-7B-claude_risky_financial",
"finetuned_model2": "coastalcph/Qwen2.5-7B-personality-safe-financial",
"finetuned_model3": "coastalcph/Qwen2.5-7B-personality-risky-financial",
"output_model_name": "coastalcph/Qwen2.5-7B-t_em_financial_1-t_diff_pers_2",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/bad_financial_diff_pers_sc=1,2",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"scale_t1": 1.0,
"scale_t2": 2.0,
"scale_t3": 2.0
}
|
sananmammadov/whisper-tiny-az
|
sananmammadov
| 2025-08-06T15:57:09Z | 20 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-03-20T06:40:21Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: whisper-tiny-az
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 300.9284185090192
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sananmammadov99/whisper-az-finetuning/runs/9s0d038l)
# whisper-tiny-az
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8843
- Wer: 300.9284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2200
- mixed_precision_training: Native AMP
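For reference, a sketch of how these settings map onto `Seq2SeqTrainingArguments` (the `output_dir` is an assumption; every other value comes from the list above):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-az",    # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # effective train batch size: 64
    warmup_steps=500,
    max_steps=2200,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
)
```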
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.3792 | 1.9398 | 500 | 2.3112 | 398.5947 |
| 1.9689 | 3.8777 | 1000 | 2.0053 | 336.3291 |
| 1.8208 | 5.8155 | 1500 | 1.9158 | 312.6439 |
| 1.7543 | 7.7534 | 2000 | 1.8843 | 300.9284 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.1
|
c-ho/2025-08-06-bll-ner_bert-base-multilingual-cased-ner-hrl_classweights_selfx_coumpound_n2-5
|
c-ho
| 2025-08-06T15:56:35Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-06T15:56:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Butanium/simple-stories-3L16H128D-attention-only-toy-transformer
|
Butanium
| 2025-08-06T15:56:32Z | 3 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-06T15:56:29Z |
# 3-Layer 16-Head Attention-Only Transformer
This is a simplified transformer model with 3 attention layers and 16 attention heads (hidden size 128), designed for studying attention mechanisms in isolation.
## Architecture Differences from Vanilla Transformer
**Removed Components:**
- **No MLP/Feed-Forward layers** - Only attention layers
- **No Layer Normalization** - No LayerNorm before/after attention
- **No positional encoding** - No position embeddings of any kind
**Kept Components:**
- Token embeddings
- Multi-head self-attention with causal masking
- Residual connections around attention layers
- Language modeling head (linear projection to vocabulary)
This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).
## Usage
```python
import torch.nn as nn
from transformers import LlamaConfig, PreTrainedModel

# Minimal wrapper reconstructed from the original snippet;
# PreTrainedModel as the base class is an assumption.
class AttentionOnlyTransformer(PreTrainedModel):
    config_class = LlamaConfig

    def __init__(self, config: LlamaConfig):
        super().__init__(config)
        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
        # AttentionLayer: causal self-attention + residual only (see sketch below)
        self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-3L16H128D-attention-only-toy-transformer')
```
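The wrapper above references an `AttentionLayer` module that is not shown in the card. As a self-contained stand-in (the released checkpoint uses llama-style attention internals; `nn.MultiheadAttention` here is an assumption for illustration), one such block could look like:
```python
import torch
import torch.nn as nn

class AttentionLayer(nn.Module):
    """Causal multi-head self-attention with a residual connection.
    No LayerNorm and no MLP, matching the architecture described above."""

    def __init__(self, config):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            config.hidden_size, config.num_attention_heads, batch_first=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        seq_len = x.size(1)
        causal_mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), diagonal=1
        )
        attn_out, _ = self.attn(x, x, x, attn_mask=causal_mask, need_weights=False)
        return x + attn_out  # residual around attention only
```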
## Training Data
The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
|
nasywaanaa/large-v3-rra-id-6aug
|
nasywaanaa
| 2025-08-06T15:51:15Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"id",
"dataset:stt-project-rra-v2/golden-dataset-2.0-tvt-muffled-6aug",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-06T15:23:01Z |
---
library_name: transformers
language:
- id
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- stt-project-rra-v2/golden-dataset-2.0-tvt-muffled-6aug
model-index:
- name: Whisper Large v3 - 1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large v3 - 1.0
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the stt-project-rra-v2/golden-dataset-2.0-tvt-muffled-6aug dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0.dev20250319+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4
|
Kadidiatou131313/modele-classification-intentions-agricoles
|
Kadidiatou131313
| 2025-08-06T15:49:51Z | 0 | 0 | null |
[
"joblib",
"region:us"
] | null | 2025-08-06T15:34:08Z |
# 🤖 Agricultural Intention Classifier (French version below 👇)
This model is a **text classifier for agricultural user intentions**, trained on a dataset of farmer questions in French. It identifies the **type of request** a user makes, such as seeking technical advice, validation, or planning.
### 💡 Use case
This classifier is intended to power an **AI assistant for farmers**, helping to route the user's question to the right processing module (calendar, technical, validation, etc.).
### 📌 Classes
The model predicts one of the following 7 intention labels:
- `alert / prevention`
- `calendar / planning`
- `recommendation / advice`
- `technical question`
- `validation request`
- `optimization`
- `problem solving`
### 🚀 How to use
```python
from joblib import load
# Load the model
model = load("model_svc_intention_predictor.joblib")
# Example prediction
question = "Can I plant millet just before the rainy season?"
prediction = model.predict([question])[0]
print("Predicted intention:", prediction)
```
## 🇫🇷 Version Française
Ce modèle est un **classifieur d’intention en contexte agricole**, entraîné sur un corpus de questions posées par des agriculteurs en français. Il permet d’identifier le **type de demande** exprimée par l’utilisateur (ex : conseil, validation, calendrier...).
### 💡 Cas d’usage
Ce modèle est conçu pour alimenter un **assistant vocal intelligent** dédié aux agriculteurs, capable d’interpréter automatiquement les intentions pour orienter les requêtes vers les bons modules de traitement.
### 📌 Intention prédite (7 classes) :
- `alerte / prévention`
- `calendrier / planification`
- `conseil / recommandation`
- `question technique / pratique`
- `demande de validation`
- `optimisation / amélioration`
- `problème / résolution`
### 🚀 Exemple d'utilisation
```python
from joblib import load
# Charger le modèle
model = load("model_svc_intention_predictor.joblib")
# Exemple de prédiction
question = "Puis-je planter du mil juste avant la saison des pluies ?"
prediction = model.predict([question])[0]
print("Intention prédite :", prediction)
```
Model trained using scikit-learn and TF-IDF vectorization (max_features=2000), based on a cleaned and labeled dataset.
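For reference, a hedged sketch of how such a pipeline could be trained and saved (the `LinearSVC` estimator is inferred from the "svc" in the filename, and the toy examples are placeholders; only `max_features=2000` and the output filename come from this card):
```python
from joblib import dump
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Hypothetical examples; the real corpus is the labeled farmer-question dataset.
questions = ["Quand semer le mil ?", "Mon champ est attaqué par des criquets, que faire ?"]
labels = ["calendrier / planification", "problème / résolution"]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=2000)),  # matches the setup stated above
    ("clf", LinearSVC()),                           # assumed SVM classifier
])
pipeline.fit(questions, labels)
dump(pipeline, "model_svc_intention_predictor.joblib")
```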
|
xylqn7/openai-llama3.1-8-code
|
xylqn7
| 2025-08-06T15:49:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T15:16:44Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: openai-llama3.1-8-code
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for openai-llama3.1-8-code
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xylqn7/openai-llama3.1-8-code", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/foundary/clarifying-em/runs/o9uwktqb)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
traision/q-FrozenLake-v1-4x4-noSlippery
|
traision
| 2025-08-06T15:45:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-06T15:45:15Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the small helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="traision/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
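From there, a short greedy-rollout sketch, assuming the pickled dict follows the Deep RL course layout (a `qtable` entry alongside `env_id`); `is_slippery=False` matches this repo's no-slippery setup:
```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```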
|
c-ho/2025-08-06-bll-ner_bert-base-multilingual-cased-ner-hrl_classweights_i10x_coumpound_n2-5
|
c-ho
| 2025-08-06T15:44:27Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-06T15:36:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
giovannidemuri/llama3b-llamab8-er-afg-v58-seed2-hx-alpaca-fpt
|
giovannidemuri
| 2025-08-06T15:44:05Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T14:32:48Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- generated_from_trainer
model-index:
- name: llama3b-llamab8-er-afg-v58-seed2-hx-alpaca-fpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3b-llamab8-er-afg-v58-seed2-hx-alpaca-fpt
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.0
|
DreadPoor/Fear_of_Isolation-12B-Model_Stock-Q6_K-GGUF
|
DreadPoor
| 2025-08-06T15:43:24Z | 132 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:DreadPoor/Fear_of_Isolation-12B-Model_Stock",
"base_model:quantized:DreadPoor/Fear_of_Isolation-12B-Model_Stock",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T15:41:33Z |
---
base_model: DreadPoor/Fear_of_Isolation-12B-Model_Stock
library_name: transformers
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- llama-cpp
- gguf-my-repo
---
# DreadPoor/Fear_of_Isolation-12B-Model_Stock-Q6_K-GGUF
This model was converted to GGUF format from [`DreadPoor/Fear_of_Isolation-12B-Model_Stock`](https://huggingface.co/DreadPoor/Fear_of_Isolation-12B-Model_Stock) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DreadPoor/Fear_of_Isolation-12B-Model_Stock) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo DreadPoor/Fear_of_Isolation-12B-Model_Stock-Q6_K-GGUF --hf-file fear_of_isolation-12b-model_stock-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo DreadPoor/Fear_of_Isolation-12B-Model_Stock-Q6_K-GGUF --hf-file fear_of_isolation-12b-model_stock-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo DreadPoor/Fear_of_Isolation-12B-Model_Stock-Q6_K-GGUF --hf-file fear_of_isolation-12b-model_stock-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo DreadPoor/Fear_of_Isolation-12B-Model_Stock-Q6_K-GGUF --hf-file fear_of_isolation-12b-model_stock-q6_k.gguf -c 2048
```
|
JoeKoji/cs5210-25su-finetuned-boxtobio-lora
|
JoeKoji
| 2025-08-06T15:42:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T15:41:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TechBuz/quant_gemma-3N-finetun_risk_pred
|
TechBuz
| 2025-08-06T15:41:33Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-08-06T15:18:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lmstudio-community/Qwen3-4B-Thinking-2507-GGUF
|
lmstudio-community
| 2025-08-06T15:40:59Z | 3,483 | 12 | null |
[
"gguf",
"text-generation",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-06T15:20:18Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model_relation: quantized
base_model: Qwen/Qwen3-4B-Thinking-2507
---
## 💫 Community Model> Qwen3 4B Thinking 2507 by Qwen
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b6096](https://github.com/ggerganov/llama.cpp/releases/tag/b6096)<br>
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
RadioactiveGooeyBlanket/ppo-LunarLander-v2
|
RadioactiveGooeyBlanket
| 2025-08-06T15:40:59Z | 10 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-06T15:39:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.24 +/- 22.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on common sb3 naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; adjust it to the actual .zip in this repository.
checkpoint = load_from_hub(repo_id="RadioactiveGooeyBlanket/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mergekit-community/mergekit-slerp-srinwor
|
mergekit-community
| 2025-08-06T15:39:57Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.3.0-12b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.3.0-12b",
"base_model:allura-org/Bigger-Body-12b",
"base_model:merge:allura-org/Bigger-Body-12b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T15:28:48Z |
---
base_model:
- PocketDoc/Dans-PersonalityEngine-V1.3.0-12b
- allura-org/Bigger-Body-12b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [PocketDoc/Dans-PersonalityEngine-V1.3.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-12b)
* [allura-org/Bigger-Body-12b](https://huggingface.co/allura-org/Bigger-Body-12b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: allura-org/Bigger-Body-12b
layer_range: [0, 32]
- model: PocketDoc/Dans-PersonalityEngine-V1.3.0-12b
layer_range: [0, 32]
merge_method: slerp
base_model: PocketDoc/Dans-PersonalityEngine-V1.3.0-12b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
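For intuition, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line. A minimal sketch of the per-tensor operation (illustrative only, not mergekit's implementation):
```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors, treated as flat vectors."""
    a, b = v0.flatten().float(), v1.flatten().float()
    cos_theta = (torch.dot(a, b) / (a.norm() * b.norm() + eps)).clamp(-1.0 + 1e-7, 1.0 - 1e-7)
    theta = torch.acos(cos_theta)
    sin_theta = torch.sin(theta)
    if sin_theta.abs() < eps:  # nearly parallel weights: fall back to linear interpolation
        mixed = (1.0 - t) * a + t * b
    else:
        mixed = (torch.sin((1.0 - t) * theta) * a + torch.sin(t * theta) * b) / sin_theta
    return mixed.reshape(v0.shape).to(v0.dtype)
```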
|
th1enq/random_forest
|
th1enq
| 2025-08-06T15:39:41Z | 0 | 0 | null |
[
"joblib",
"license:other",
"region:us"
] | null | 2025-08-06T15:35:55Z |
---
license: other
license_name: vnu-uet
license_link: LICENSE
---
|
ymatari/act_so101_place_ball_4
|
ymatari
| 2025-08-06T15:38:22Z | 2 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:ymatari/place-ball-2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T15:37:32Z |
---
datasets: ymatari/place-ball-2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
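To make "action chunks" concrete, here is a purely illustrative control loop with toy stand-ins for the policy and environment (none of these names are LeRobot APIs):
```python
import numpy as np

def chunked_rollout(predict_chunk, env_step, obs, n_chunks=3):
    """ACT-style execution: predict a chunk of future actions from the
    current observation, run it open-loop, then re-plan from the new state."""
    for _ in range(n_chunks):
        actions = predict_chunk(obs)  # shape: (chunk_size, action_dim)
        for action in actions:
            obs = env_step(action)
    return obs

# Toy stand-ins so the sketch runs: a constant-chunk "policy" and an integrator "env".
state = np.zeros(2)
def toy_env_step(action):
    global state
    state = state + action
    return state

final_obs = chunked_rollout(lambda obs: np.full((50, 2), 0.01), toy_env_step, state)
```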
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
maydixit/qwen3_32b_lora_extended_data_20epoch
|
maydixit
| 2025-08-06T15:38:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T15:37:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
space55/blockassist-bc-feathered_meek_capybara_1754492538
|
space55
| 2025-08-06T15:35:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"feathered meek capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-06T15:35:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- feathered meek capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rmdhirr/suja-lorab-restart4-c-suja-1000
|
rmdhirr
| 2025-08-06T15:34:56Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-06T15:33:52Z |
---
base_model: unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
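In the absence of documented usage, a minimal loading sketch can be inferred from this card's metadata (4-bit vision base model plus LoRA adapter). Everything below is an assumption drawn from the frontmatter, not documented behavior:

```python
from transformers import AutoProcessor, MllamaForConditionalGeneration
from peft import PeftModel

# Assumed from the card metadata: load the declared 4-bit base, then attach this adapter.
base = MllamaForConditionalGeneration.from_pretrained(
    "unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "rmdhirr/suja-lorab-restart4-c-suja-1000")
processor = AutoProcessor.from_pretrained("unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit")
```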
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
UzzyDizzy/q-Taxi-v3
|
UzzyDizzy
| 2025-08-06T15:33:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-06T15:33:41Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="UzzyDizzy/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
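For a quick sanity check, a greedy rollout with the loaded Q-table might look like the following sketch. It assumes the course's model dict exposes the table under `model["qtable"]` and a gym>=0.26-style step API:

```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```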
|
ekiprop/SST-2-FULL_FT-seed30
|
ekiprop
| 2025-08-06T15:33:06Z | 51 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-06T15:07:11Z |
---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SST-2-FULL_FT-seed30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-FULL_FT-seed30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1795
- Accuracy: 0.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
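As a rough sketch, these settings map onto a standard `Trainer` setup like the one below. Assumptions, since the training script is not published: the GLUE SST-2 split (suggested by the model name) and the default dynamic-padding collator.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

dataset = load_dataset("glue", "sst2")  # assumption: standard SST-2 split
encoded = dataset.map(lambda b: tokenizer(b["sentence"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="SST-2-FULL_FT-seed30",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    eval_strategy="steps",
    eval_steps=200,
    seed=42,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```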
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.4015 | 0.0950 | 200 | 0.2626 | 0.8979 |
| 0.3095 | 0.1900 | 400 | 0.2196 | 0.9278 |
| 0.2698 | 0.2850 | 600 | 0.2433 | 0.9163 |
| 0.2418 | 0.3800 | 800 | 0.1982 | 0.9404 |
| 0.2302 | 0.4751 | 1000 | 0.3101 | 0.8968 |
| 0.2271 | 0.5701 | 1200 | 0.2355 | 0.9300 |
| 0.2124 | 0.6651 | 1400 | 0.1944 | 0.9300 |
| 0.2067 | 0.7601 | 1600 | 0.2010 | 0.9415 |
| 0.2054 | 0.8551 | 1800 | 0.1795 | 0.9438 |
| 0.1918 | 0.9501 | 2000 | 0.1988 | 0.9381 |
| 0.1712 | 1.0451 | 2200 | 0.1969 | 0.9335 |
| 0.1421 | 1.1401 | 2400 | 0.1943 | 0.9392 |
| 0.1511 | 1.2352 | 2600 | 0.2512 | 0.9323 |
| 0.1511 | 1.3302 | 2800 | 0.2293 | 0.9335 |
| 0.1461 | 1.4252 | 3000 | 0.2454 | 0.9323 |
| 0.1433 | 1.5202 | 3200 | 0.2441 | 0.9346 |
| 0.1591 | 1.6152 | 3400 | 0.2179 | 0.9289 |
| 0.138 | 1.7102 | 3600 | 0.3245 | 0.9060 |
| 0.1382 | 1.8052 | 3800 | 0.2524 | 0.9323 |
| 0.1541 | 1.9002 | 4000 | 0.2077 | 0.9278 |
| 0.1335 | 1.9952 | 4200 | 0.2670 | 0.9312 |
| 0.1099 | 2.0903 | 4400 | 0.2445 | 0.9312 |
| 0.1088 | 2.1853 | 4600 | 0.2541 | 0.9300 |
| 0.1117 | 2.2803 | 4800 | 0.3141 | 0.9197 |
| 0.1052 | 2.3753 | 5000 | 0.2953 | 0.9220 |
| 0.1123 | 2.4703 | 5200 | 0.2794 | 0.9266 |
| 0.1035 | 2.5653 | 5400 | 0.2783 | 0.9300 |
| 0.1173 | 2.6603 | 5600 | 0.2436 | 0.9346 |
| 0.1005 | 2.7553 | 5800 | 0.2554 | 0.9346 |
| 0.1107 | 2.8504 | 6000 | 0.2594 | 0.9266 |
| 0.0981 | 2.9454 | 6200 | 0.2906 | 0.9312 |
| 0.0965 | 3.0404 | 6400 | 0.3357 | 0.9312 |
| 0.0812 | 3.1354 | 6600 | 0.2544 | 0.9438 |
| 0.0848 | 3.2304 | 6800 | 0.2733 | 0.9392 |
| 0.0891 | 3.3254 | 7000 | 0.2623 | 0.9312 |
| 0.075 | 3.4204 | 7200 | 0.3035 | 0.9381 |
| 0.0791 | 3.5154 | 7400 | 0.2715 | 0.9404 |
| 0.0785 | 3.6105 | 7600 | 0.2622 | 0.9392 |
| 0.082 | 3.7055 | 7800 | 0.2274 | 0.9392 |
| 0.0764 | 3.8005 | 8000 | 0.2828 | 0.9369 |
| 0.0795 | 3.8955 | 8200 | 0.2644 | 0.9381 |
| 0.0836 | 3.9905 | 8400 | 0.2614 | 0.9369 |
| 0.0612 | 4.0855 | 8600 | 0.3463 | 0.9220 |
| 0.0488 | 4.1805 | 8800 | 0.3500 | 0.9335 |
| 0.0574 | 4.2755 | 9000 | 0.3381 | 0.9300 |
| 0.0684 | 4.3705 | 9200 | 0.3019 | 0.9358 |
| 0.0629 | 4.4656 | 9400 | 0.2993 | 0.9323 |
| 0.0539 | 4.5606 | 9600 | 0.3095 | 0.9369 |
| 0.067 | 4.6556 | 9800 | 0.2966 | 0.9381 |
| 0.0573 | 4.7506 | 10000 | 0.2836 | 0.9415 |
| 0.0567 | 4.8456 | 10200 | 0.3004 | 0.9346 |
| 0.0623 | 4.9406 | 10400 | 0.2936 | 0.9381 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
codersan/validadted_e5smallStudent
|
codersan
| 2025-08-06T15:33:01Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:172826",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-06T14:56:37Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:172826
- loss:CosineSimilarityLoss
base_model: intfloat/multilingual-e5-small
widget:
- source_sentence: How do you make Yahoo your homepage?
sentences:
- چگونه ویکی پدیا بدون تبلیغ در وب سایت خود درآمد کسب می کند؟
- چگونه می توانم برای امتحان INS 21 آماده شوم؟
- How can I make Yahoo my homepage on my browser?
- source_sentence: کدام VPN رایگان در چین کار می کند؟
sentences:
- VPN های رایگان که در چین کار می کنند چیست؟
- How can I stop masturbations?
- آیا مدرسه خلاقیت را می کشد؟
- source_sentence: چند روش خوب برای کاهش وزن چیست؟
sentences:
- چگونه می توانم یک کتاب خوب بنویسم؟
- من اضافه وزن دارمچگونه می توانم وزن کم کنم؟
- آیا می توانید ببینید چه کسی داستانهای اینستاگرام شما را مشاهده می کند؟
- source_sentence: چگونه می توان یک Dell Inspiron 1525 را به تنظیمات کارخانه بازگرداند؟
sentences:
- چگونه می توان یک Dell Inspiron B130 را به تنظیمات کارخانه بازگرداند؟
- مبدل چیست؟
- چگونه زندگی شما بعد از تشخیص HIV مثبت تغییر کرد؟
- source_sentence: داشتن هزاران دنبال کننده در Quora چگونه است؟
sentences:
- چگونه Airprint HP OfficeJet 4620 با HP LaserJet Enterprise M606X مقایسه می شود؟
- چه چیزی است که ده ها هزار دنبال کننده در Quora داشته باشید؟
- اگر هند واردات همه محصولات چینی را ممنوع کند ، چه می شود؟
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision c007d7ef6fd86656326059b28395a7a03a7c5846 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("codersan/validadted_e5smallStudent")
# Run inference
sentences = [
'داشتن هزاران دنبال کننده در Quora چگونه است؟',
'چه چیزی است که ده ها هزار دنبال کننده در Quora داشته باشید؟',
'چگونه Airprint HP OfficeJet 4620 با HP LaserJet Enterprise M606X مقایسه می شود؟',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 172,826 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 16.19 tokens</li><li>max: 84 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.5 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 0.73</li><li>mean: 0.94</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------------------------------------|:---------------------------------------------------------------|:--------------------------------|
| <code>تفاوت بین تحلیلگر تحقیقات بازار و تحلیلگر تجارت چیست؟</code> | <code>تفاوت بین تحقیقات بازاریابی و تحلیلگر تجارت چیست؟</code> | <code>0.9806554317474365</code> |
| <code>خوردن چه چیزی باعث دل درد میشود؟</code> | <code>چه چیزی باعث رفع دل درد میشود؟</code> | <code>0.9417070150375366</code> |
| <code>بهترین نرم افزار ویرایش ویدیویی کدام است؟</code> | <code>بهترین نرم افزار برای ویرایش ویدیو چیست؟</code> | <code>0.9928616285324097</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
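Concretely, this objective regresses the cosine similarity of the two sentence embeddings onto the float score via MSE. A minimal sketch of such a setup using the legacy `fit` API (an illustration, not the exact training script):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("intfloat/multilingual-e5-small")

# Illustrative pair with a float similarity label, mirroring the samples above.
train_examples = [
    InputExample(
        texts=["How do you make Yahoo your homepage?",
               "How can I make Yahoo my homepage on my browser?"],
        label=0.98,
    ),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=12)
loss = losses.CosineSimilarityLoss(model)  # cosine(u, v) fitted to the label with MSE
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```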
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 12
- `learning_rate`: 5e-06
- `weight_decay`: 0.01
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `push_to_hub`: True
- `hub_model_id`: codersan/validadted_e5smallStudent
- `eval_on_start`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-06
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: codersan/validadted_e5smallStudent
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: True
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:-----:|:-------------:|
| 0 | 0 | - |
| 0.0069 | 100 | 0.0004 |
| 0.0139 | 200 | 0.0004 |
| 0.0208 | 300 | 0.0003 |
| 0.0278 | 400 | 0.0003 |
| 0.0347 | 500 | 0.0003 |
| 0.0417 | 600 | 0.0003 |
| 0.0486 | 700 | 0.0003 |
| 0.0555 | 800 | 0.0003 |
| 0.0625 | 900 | 0.0003 |
| 0.0694 | 1000 | 0.0003 |
| 0.0764 | 1100 | 0.0002 |
| 0.0833 | 1200 | 0.0002 |
| 0.0903 | 1300 | 0.0002 |
| 0.0972 | 1400 | 0.0002 |
| 0.1041 | 1500 | 0.0002 |
| 0.1111 | 1600 | 0.0002 |
| 0.1180 | 1700 | 0.0002 |
| 0.1250 | 1800 | 0.0002 |
| 0.1319 | 1900 | 0.0002 |
| 0.1389 | 2000 | 0.0002 |
| 0.1458 | 2100 | 0.0002 |
| 0.1527 | 2200 | 0.0002 |
| 0.1597 | 2300 | 0.0002 |
| 0.1666 | 2400 | 0.0002 |
| 0.1736 | 2500 | 0.0002 |
| 0.1805 | 2600 | 0.0002 |
| 0.1875 | 2700 | 0.0002 |
| 0.1944 | 2800 | 0.0002 |
| 0.2013 | 2900 | 0.0002 |
| 0.2083 | 3000 | 0.0002 |
| 0.2152 | 3100 | 0.0002 |
| 0.2222 | 3200 | 0.0002 |
| 0.2291 | 3300 | 0.0002 |
| 0.2361 | 3400 | 0.0002 |
| 0.2430 | 3500 | 0.0002 |
| 0.2499 | 3600 | 0.0002 |
| 0.2569 | 3700 | 0.0002 |
| 0.2638 | 3800 | 0.0002 |
| 0.2708 | 3900 | 0.0002 |
| 0.2777 | 4000 | 0.0002 |
| 0.2847 | 4100 | 0.0002 |
| 0.2916 | 4200 | 0.0002 |
| 0.2985 | 4300 | 0.0002 |
| 0.3055 | 4400 | 0.0002 |
| 0.3124 | 4500 | 0.0002 |
| 0.3194 | 4600 | 0.0002 |
| 0.3263 | 4700 | 0.0002 |
| 0.3333 | 4800 | 0.0002 |
| 0.3402 | 4900 | 0.0002 |
| 0.3471 | 5000 | 0.0002 |
| 0.3541 | 5100 | 0.0002 |
| 0.3610 | 5200 | 0.0002 |
| 0.3680 | 5300 | 0.0002 |
| 0.3749 | 5400 | 0.0002 |
| 0.3819 | 5500 | 0.0002 |
| 0.3888 | 5600 | 0.0002 |
| 0.3958 | 5700 | 0.0002 |
| 0.4027 | 5800 | 0.0002 |
| 0.4096 | 5900 | 0.0002 |
| 0.4166 | 6000 | 0.0002 |
| 0.4235 | 6100 | 0.0002 |
| 0.4305 | 6200 | 0.0002 |
| 0.4374 | 6300 | 0.0002 |
| 0.4444 | 6400 | 0.0002 |
| 0.4513 | 6500 | 0.0002 |
| 0.4582 | 6600 | 0.0002 |
| 0.4652 | 6700 | 0.0002 |
| 0.4721 | 6800 | 0.0002 |
| 0.4791 | 6900 | 0.0002 |
| 0.4860 | 7000 | 0.0002 |
| 0.4930 | 7100 | 0.0002 |
| 0.4999 | 7200 | 0.0002 |
| 0.5068 | 7300 | 0.0002 |
| 0.5138 | 7400 | 0.0002 |
| 0.5207 | 7500 | 0.0002 |
| 0.5277 | 7600 | 0.0002 |
| 0.5346 | 7700 | 0.0002 |
| 0.5416 | 7800 | 0.0002 |
| 0.5485 | 7900 | 0.0002 |
| 0.5554 | 8000 | 0.0002 |
| 0.5624 | 8100 | 0.0002 |
| 0.5693 | 8200 | 0.0002 |
| 0.5763 | 8300 | 0.0002 |
| 0.5832 | 8400 | 0.0002 |
| 0.5902 | 8500 | 0.0002 |
| 0.5971 | 8600 | 0.0002 |
| 0.6040 | 8700 | 0.0002 |
| 0.6110 | 8800 | 0.0002 |
| 0.6179 | 8900 | 0.0002 |
| 0.6249 | 9000 | 0.0002 |
| 0.6318 | 9100 | 0.0002 |
| 0.6388 | 9200 | 0.0002 |
| 0.6457 | 9300 | 0.0002 |
| 0.6526 | 9400 | 0.0002 |
| 0.6596 | 9500 | 0.0002 |
| 0.6665 | 9600 | 0.0002 |
| 0.6735 | 9700 | 0.0002 |
| 0.6804 | 9800 | 0.0002 |
| 0.6874 | 9900 | 0.0002 |
| 0.6943 | 10000 | 0.0002 |
| 0.7012 | 10100 | 0.0002 |
| 0.7082 | 10200 | 0.0002 |
| 0.7151 | 10300 | 0.0002 |
| 0.7221 | 10400 | 0.0002 |
| 0.7290 | 10500 | 0.0002 |
| 0.7360 | 10600 | 0.0002 |
| 0.7429 | 10700 | 0.0002 |
| 0.7498 | 10800 | 0.0002 |
| 0.7568 | 10900 | 0.0002 |
| 0.7637 | 11000 | 0.0002 |
| 0.7707 | 11100 | 0.0002 |
| 0.7776 | 11200 | 0.0002 |
| 0.7846 | 11300 | 0.0002 |
| 0.7915 | 11400 | 0.0002 |
| 0.7984 | 11500 | 0.0002 |
| 0.8054 | 11600 | 0.0002 |
| 0.8123 | 11700 | 0.0002 |
| 0.8193 | 11800 | 0.0002 |
| 0.8262 | 11900 | 0.0002 |
| 0.8332 | 12000 | 0.0002 |
| 0.8401 | 12100 | 0.0002 |
| 0.8470 | 12200 | 0.0002 |
| 0.8540 | 12300 | 0.0002 |
| 0.8609 | 12400 | 0.0002 |
| 0.8679 | 12500 | 0.0002 |
| 0.8748 | 12600 | 0.0002 |
| 0.8818 | 12700 | 0.0002 |
| 0.8887 | 12800 | 0.0002 |
| 0.8956 | 12900 | 0.0002 |
| 0.9026 | 13000 | 0.0002 |
| 0.9095 | 13100 | 0.0002 |
| 0.9165 | 13200 | 0.0002 |
| 0.9234 | 13300 | 0.0002 |
| 0.9304 | 13400 | 0.0002 |
| 0.9373 | 13500 | 0.0002 |
| 0.9442 | 13600 | 0.0002 |
| 0.9512 | 13700 | 0.0002 |
| 0.9581 | 13800 | 0.0002 |
| 0.9651 | 13900 | 0.0002 |
| 0.9720 | 14000 | 0.0002 |
| 0.9790 | 14100 | 0.0002 |
| 0.9859 | 14200 | 0.0002 |
| 0.9928 | 14300 | 0.0002 |
| 0.9998 | 14400 | 0.0002 |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
joanna302/Qwen3-8B-Base_zh_ar__alpaca_part_SFT_2e-05
|
joanna302
| 2025-08-06T15:32:57Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T16:01:42Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_zh_ar__alpaca_part_SFT_2e-05
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for Qwen3-8B-Base_zh_ar__alpaca_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_zh_ar__alpaca_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_zh_ar__alpaca_part_SFT_2e-05/runs/9db7utkw)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jedisct1/Qwen3-Coder-30B-A3B-Instruct-q4-mlx
|
jedisct1
| 2025-08-06T15:32:37Z | 54 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_moe",
"unsloth",
"text-generation",
"conversational",
"base_model:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"base_model:quantized:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-06T15:20:45Z |
---
tags:
- unsloth
- mlx
base_model: unsloth/Qwen3-Coder-30B-A3B-Instruct
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
---
# jedisct1/Qwen3-Coder-30B-A3B-Instruct-q4-mlx
This model [jedisct1/Qwen3-Coder-30B-A3B-Instruct-q4-mlx](https://huggingface.co/jedisct1/Qwen3-Coder-30B-A3B-Instruct-q4-mlx) was
converted to MLX format from [unsloth/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct)
using mlx-lm version **0.26.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("jedisct1/Qwen3-Coder-30B-A3B-Instruct-q4-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
jedisct1/Qwen3-Coder-30B-A3B-Instruct-mlx
|
jedisct1
| 2025-08-06T15:31:31Z | 28 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_moe",
"unsloth",
"text-generation",
"conversational",
"base_model:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"base_model:quantized:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-07-31T22:02:57Z |
---
tags:
- unsloth
- mlx
base_model: unsloth/Qwen3-Coder-30B-A3B-Instruct
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
---
# jedisct1/Qwen3-Coder-30B-A3B-Instruct-mlx
This model [jedisct1/Qwen3-Coder-30B-A3B-Instruct-mlx](https://huggingface.co/jedisct1/Qwen3-Coder-30B-A3B-Instruct-mlx) was
converted to MLX format from [unsloth/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct)
using mlx-lm version **0.26.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("jedisct1/Qwen3-Coder-30B-A3B-Instruct-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
jaytonde05/MAP_EXP_09_FULL
|
jaytonde05
| 2025-08-06T15:31:16Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:MathGenie/MathCoder2-DeepSeekMath-7B",
"base_model:adapter:MathGenie/MathCoder2-DeepSeekMath-7B",
"region:us"
] | null | 2025-08-06T04:07:47Z |
---
base_model: MathGenie/MathCoder2-DeepSeekMath-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
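Pending official instructions, a hedged loading sketch based on the card's declared base model and PEFT adapter (an assumption, not documented usage):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed loading path inferred from the card's metadata (base model + PEFT adapter).
tokenizer = AutoTokenizer.from_pretrained("MathGenie/MathCoder2-DeepSeekMath-7B")
base = AutoModelForCausalLM.from_pretrained("MathGenie/MathCoder2-DeepSeekMath-7B")
model = PeftModel.from_pretrained(base, "jaytonde05/MAP_EXP_09_FULL")
```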
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Butanium/simple-stories-3L8H512D-attention-only-toy-transformer
|
Butanium
| 2025-08-06T15:30:30Z | 6 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-06T15:30:26Z |
# 3-Layer 8-Head Attention-Only Transformer
This is a simplified transformer model with 3 attention layers and 8 attention heads per layer (hidden size 512), designed for studying attention mechanisms in isolation.
## Architecture Differences from Vanilla Transformer
**Removed Components:**
- **No MLP/Feed-Forward layers** - Only attention layers
- **No Layer Normalization** - No LayerNorm before/after attention
- **No positional encoding** - No position embeddings of any kind
**Kept Components:**
- Token embeddings
- Multi-head self-attention with causal masking
- Residual connections around attention layers
- Language modeling head (linear projection to vocabulary)
This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).
## Usage
```python
import torch.nn as nn
from transformers import LlamaConfig, LlamaPreTrainedModel

class AttentionOnlyTransformer(LlamaPreTrainedModel):
    config_class = LlamaConfig

    def __init__(self, config: LlamaConfig):
        super().__init__(config)
        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
        # AttentionLayer is the repo's attention-plus-residual block (a sketch follows below)
        self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-3L8H512D-attention-only-toy-transformer')
```
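For reference, the per-layer block described above might look like the following hypothetical sketch; the repository's actual `AttentionLayer` may differ in detail:

```python
import torch
import torch.nn as nn

class AttentionLayer(nn.Module):
    """Sketch of the block described above: causal multi-head self-attention
    plus a residual connection, with no LayerNorm, no MLP, and no positional
    encoding."""

    def __init__(self, config):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            config.hidden_size, config.num_attention_heads, batch_first=True
        )

    def forward(self, x):
        seq_len = x.size(1)
        # Boolean mask: True above the diagonal blocks attention to future positions.
        causal = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), diagonal=1
        )
        out, _ = self.attn(x, x, x, attn_mask=causal)
        return x + out  # residual connection around attention
```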
## Training Data
The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
|
c-ho/2025-08-06-bll-ner_bert-base-multilingual-cased-ner-hrl_classweights_i10x
|
c-ho
| 2025-08-06T15:27:53Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-06T12:40:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
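In the absence of documented usage, a sketch inferred from the repo's token-classification tag (labels and aggregation behavior are not documented on this card):

```python
from transformers import pipeline

# Assumed usage: a standard NER pipeline over this checkpoint.
ner = pipeline(
    "token-classification",
    model="c-ho/2025-08-06-bll-ner_bert-base-multilingual-cased-ner-hrl_classweights_i10x",
    aggregation_strategy="simple",
)
print(ner("Barack Obama visited Paris in July."))
```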
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lwolfrat/multi-cv-heur-f-foca-t-free-t
|
lwolfrat
| 2025-08-06T15:26:52Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-06T03:32:42Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: multi-cv-heur-f-foca-t-free-t
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-cv-heur-f-foca-t-free-t
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2402
- Accuracy: 0.925
- Precision Macro: 0.3083
- Recall Macro: 0.3333
- F1 Macro: 0.3203
- Krippendorff's Alpha: -0.0273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.427037282932538e-06
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 94
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | Krippendorff's Alpha |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-------------------:|
| 0.2211 | 1.0 | 960 | 0.2402 | 0.925 | 0.3083 | 0.3333 | 0.3203 | -0.0273 |
### Framework versions
- Transformers 4.53.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.2
## 🔒 Layer Freezing
- `freeze_embeddings`: False
- `num_transformer_layers_freeze`: 0
## ⚙️ TrainingArguments
```json
{
"output_dir": "models/multi-cv-heur-f-foca-t-free-t",
"overwrite_output_dir": false,
"do_train": false,
"do_eval": true,
"do_predict": false,
"eval_strategy": "epoch",
"prediction_loss_only": false,
"per_device_train_batch_size": 1,
"per_device_eval_batch_size": 4,
"per_gpu_train_batch_size": null,
"per_gpu_eval_batch_size": null,
"gradient_accumulation_steps": 1,
"eval_accumulation_steps": null,
"eval_delay": 0,
"torch_empty_cache_steps": null,
"learning_rate": 5.427037282932538e-06,
"weight_decay": 0.0,
"adam_beta1": 0.9,
"adam_beta2": 0.999,
"adam_epsilon": 1e-08,
"max_grad_norm": 1.0,
"num_train_epochs": 1,
"max_steps": -1,
"lr_scheduler_type": "linear",
"lr_scheduler_kwargs": {},
"warmup_ratio": 0.0,
"warmup_steps": 94,
"log_level": "passive",
"log_level_replica": "warning",
"log_on_each_node": true,
"logging_dir": "logs",
"logging_strategy": "epoch",
"logging_first_step": false,
"logging_steps": 500,
"logging_nan_inf_filter": true,
"save_strategy": "epoch",
"save_steps": 500,
"save_total_limit": 1,
"save_safetensors": true,
"save_on_each_node": false,
"save_only_model": false,
"restore_callback_states_from_checkpoint": false,
"no_cuda": false,
"use_cpu": false,
"use_mps_device": false,
"seed": 42,
"data_seed": null,
"jit_mode_eval": false,
"use_ipex": false,
"bf16": false,
"fp16": false,
"fp16_opt_level": "O1",
"half_precision_backend": "auto",
"bf16_full_eval": false,
"fp16_full_eval": false,
"tf32": null,
"local_rank": 0,
"ddp_backend": null,
"tpu_num_cores": null,
"tpu_metrics_debug": false,
"debug": [],
"dataloader_drop_last": false,
"eval_steps": null,
"dataloader_num_workers": 0,
"dataloader_prefetch_factor": null,
"past_index": -1,
"run_name": "models/multi-cv-heur-f-foca-t-free-t",
"disable_tqdm": false,
"remove_unused_columns": true,
"label_names": null,
"load_best_model_at_end": true,
"metric_for_best_model": "loss",
"greater_is_better": false,
"ignore_data_skip": false,
"fsdp": [],
"fsdp_min_num_params": 0,
"fsdp_config": {
"min_num_params": 0,
"xla": false,
"xla_fsdp_v2": false,
"xla_fsdp_grad_ckpt": false
},
"fsdp_transformer_layer_cls_to_wrap": null,
"accelerator_config": {
"split_batches": false,
"dispatch_batches": null,
"even_batches": true,
"use_seedable_sampler": true,
"non_blocking": false,
"gradient_accumulation_kwargs": null
},
"deepspeed": null,
"label_smoothing_factor": 0.0,
"optim": "adamw_torch",
"optim_args": null,
"adafactor": false,
"group_by_length": false,
"length_column_name": "length",
"report_to": [
"tensorboard"
],
"ddp_find_unused_parameters": null,
"ddp_bucket_cap_mb": null,
"ddp_broadcast_buffers": null,
"dataloader_pin_memory": true,
"dataloader_persistent_workers": false,
"skip_memory_metrics": true,
"use_legacy_prediction_loop": false,
"push_to_hub": true,
"resume_from_checkpoint": null,
"hub_model_id": "lwolfrat/multi-cv-heur-f-foca-t-free-t",
"hub_strategy": "end",
"hub_private_repo": true,
"hub_always_push": false,
"hub_revision": null,
"gradient_checkpointing": false,
"gradient_checkpointing_kwargs": null,
"include_inputs_for_metrics": false,
"include_for_metrics": [],
"eval_do_concat_batches": true,
"fp16_backend": "auto",
"push_to_hub_model_id": null,
"push_to_hub_organization": null,
"mp_parameters": "",
"auto_find_batch_size": false,
"full_determinism": false,
"torchdynamo": null,
"ray_scope": "last",
"ddp_timeout": 1800,
"torch_compile": false,
"torch_compile_backend": null,
"torch_compile_mode": null,
"neftune_noise_alpha": null,
"optim_target_modules": null,
"batch_eval_metrics": false,
"eval_on_start": false,
"use_liger_kernel": false,
"liger_kernel_config": null,
"eval_use_gather_object": false,
"alpha": 0.17508275896719863,
"gamma": 2.491099061616529
}
```
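The non-standard `alpha` and `gamma` entries at the bottom of this config suggest a focal-loss objective (consistent with the `foca` in the run name). A sketch of how such arguments are typically consumed via a custom `Trainer` subclass; the actual training script is not published:

```python
import torch
import torch.nn.functional as F
from transformers import Trainer

class FocalLossTrainer(Trainer):
    """Assumed consumer of the custom `alpha`/`gamma` arguments above."""

    def __init__(self, *args, alpha=0.175, gamma=2.49, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha, self.gamma = alpha, gamma

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        ce = F.cross_entropy(outputs.logits, labels, reduction="none")
        pt = torch.exp(-ce)  # model's probability for the true class
        loss = (self.alpha * (1.0 - pt) ** self.gamma * ce).mean()
        return (loss, outputs) if return_outputs else loss
```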
## 📊 Evaluation (from script)
```json
{
"eval_loss": 0.2401786744594574,
"eval_accuracy": 0.925,
"eval_precision_macro": 0.30833333333333335,
"eval_recall_macro": 0.3333333333333333,
"eval_f1_macro": 0.3203463203463203,
"eval_krippendorffs_alpha": -0.02728464196354108,
"eval_runtime": 154.7114,
"eval_samples_per_second": 1.551,
"eval_steps_per_second": 0.388,
"epoch": 1.0,
"step": 960,
"checkpoint_path": "models/multi-cv-heur-f-foca-t-free-t/checkpoint-960"
}
```
|
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-8bit
|
lmstudio-community
| 2025-08-06T15:25:32Z | 771 | 6 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mlx",
"conversational",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-06T15:24:43Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Thinking-2507
---
## 💫 Community Model> Qwen3-4B-Thinking-2507 by Qwen
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)<br>
**MLX quantization**: provided by [LM Studio team](https://x.com/lmstudio) using [mlx_lm](https://github.com/ml-explore/mlx-lm)<br>
## Technical Details
8-bit quantized version of Qwen3-4B-Thinking-2507 using MLX, optimized for Apple Silicon.
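A quantization like this can be reproduced with mlx-lm's `convert` utility. A sketch; argument names follow the mlx-lm documentation and may shift between versions:

```python
from mlx_lm import convert

# Assumed invocation: convert the original weights to 8-bit MLX format.
convert(
    hf_path="Qwen/Qwen3-4B-Thinking-2507",
    mlx_path="Qwen3-4B-Thinking-2507-MLX-8bit",
    quantize=True,
    q_bits=8,
)
```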
## Special thanks
🙏 Special thanks to the [Apple Machine Learning Research](https://github.com/ml-explore) team for creating [MLX](https://github.com/ml-explore/mlx).
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free or virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-5bit
|
lmstudio-community
| 2025-08-06T15:23:11Z | 76 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mlx",
"conversational",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"region:us"
] |
text-generation
| 2025-08-06T15:22:30Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Thinking-2507
---
## 💫 Community Model> Qwen3-4B-Thinking-2507 by Qwen
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)<br>
**MLX quantization**: provided by [LM Studio team](https://x.com/lmstudio) using [mlx_lm](https://github.com/ml-explore/mlx-lm)<br>
## Technical Details
5-bit quantized version of Qwen3-4B-Thinking-2507 using MLX, optimized for Apple Silicon.
## Special thanks
🙏 Special thanks to the [Apple Machine Learning Research](https://github.com/ml-explore) team for creating [MLX](https://github.com/ml-explore/mlx).
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free or virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
DreadPoor/Fear_Of_Ridicule-12B-Model_Stock
|
DreadPoor
| 2025-08-06T15:22:54Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T01:46:43Z |
---
library_name: transformers
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---
# Fear_Of_Ridicule
Fear_Of_Ridicule is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [yamatazen/EtherealAurora-12B-v2](https://huggingface.co/yamatazen/EtherealAurora-12B-v2)
* [yamatazen/EsotericSage-12B](https://huggingface.co/yamatazen/EsotericSage-12B)
* [redrix/patricide-12B-Unslop-Mell](https://huggingface.co/redrix/patricide-12B-Unslop-Mell)
* [yamatazen/LorablatedStock-12B](https://huggingface.co/yamatazen/LorablatedStock-12B)
## 🧩 Configuration
```yaml
models:
- model: yamatazen/EtherealAurora-12B-v2
- model: yamatazen/EsotericSage-12B
- model: redrix/patricide-12B-Unslop-Mell
- model: yamatazen/LorablatedStock-12B
merge_method: model_stock
base_model: DreadPoor/Fear_of_Isolation-12B-Model_Stock
normalize: false
int8_mask: true
dtype: bfloat16
```
|
SAB03/gpt-oss-20b-multilingual-reasoner
|
SAB03
| 2025-08-06T15:22:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T14:08:47Z |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SAB03/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-4bit
|
lmstudio-community
| 2025-08-06T15:22:07Z | 269 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mlx",
"conversational",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-06T15:21:28Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Thinking-2507
---
## 💫 Community Model> Qwen3-4B-Thinking-2507 by Qwen
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)<br>
**MLX quantization**: provided by [LM Studio team](https://x.com/lmstudio) using [mlx_lm](https://github.com/ml-explore/mlx-lm)<br>
## Technical Details
4-bit quantized version of Qwen3-4B-Thinking-2507 using MLX, optimized for Apple Silicon.
## Special thanks
🙏 Special thanks to the [Apple Machine Learning Research](https://github.com/ml-explore) team for creating [MLX](https://github.com/ml-explore/mlx).
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
azale-ai/DukunLM-7B-V1.0-Uncensored-sharded
|
azale-ai
| 2025-08-06T15:21:13Z | 23 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"qlora",
"wizardlm",
"uncensored",
"instruct",
"chat",
"alpaca",
"indonesia",
"sharded",
"id",
"en",
"dataset:MBZUAI/Bactrian-X",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-13T03:56:17Z |
---
license: cc-by-nc-4.0
datasets:
- MBZUAI/Bactrian-X
language:
- id
- en
tags:
- qlora
- wizardlm
- uncensored
- instruct
- chat
- alpaca
- indonesia
- sharded
---
For the documentation, please refer to the main model. [Link](https://huggingface.co/azale-ai/DukunLM-7B-V1.0-Uncensored)
|
YangZexi/mt5-xl-stance-lora
|
YangZexi
| 2025-08-06T15:20:52Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/mt5-xl",
"lora",
"transformers",
"arxiv:1910.09700",
"base_model:google/mt5-xl",
"region:us"
] | null | 2025-08-06T15:20:25Z |
---
base_model: google/mt5-xl
library_name: peft
tags:
- base_model:adapter:google/mt5-xl
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
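The card gives no code, so below is a hedged loading sketch for a LoRA adapter on google/mt5-xl. The text2text stance-detection input format is an assumption on our part, not something documented by the author:
```python
# Hedged sketch: attach the LoRA adapter to the base mT5-XL model.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-xl")
model = PeftModel.from_pretrained(base, "YangZexi/mt5-xl-stance-lora")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-xl")

# Hypothetical stance-detection prompt format (assumption).
inputs = tokenizer("target: climate policy. text: We must act now.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```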
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
c-ho/2025-08-06-bll-ner_bert-base-multilingual-cased-ner-hrl_classweights
|
c-ho
| 2025-08-06T15:19:01Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-06T15:18:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
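The template above contains no code; given the repo name and the token-classification tag, a hedged quick-start sketch might look like this (the entity label scheme is whatever the checkpoint was trained with, which is not documented here):
```python
# Hedged sketch: standard token-classification pipeline usage.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="c-ho/2025-08-06-bll-ner_bert-base-multilingual-cased-ner-hrl_classweights",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel visited Paris in 2019."))
```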
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
archvilefilth/quantum-wizard-council
|
archvilefilth
| 2025-08-06T15:18:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T15:08:51Z |
---
title: Quantum Wizard V2 - AI Ideation System
emoji: 🧙♂️
colorFrom: purple
colorTo: indigo
sdk: gradio
sdk_version: 4.0.0
app_file: app.py
pinned: false
---
# 🧙♂️ Quantum Wizard V2 - AI Ideation System
**Transform any idea into a business plan in 5 minutes with AI-powered analysis and creative variations.**
## 🚀 Live Demo
This Hugging Face Space showcases the core features of Quantum Wizard V2:
### ✨ **Wizard Council Analysis**
Get instant 5-expert analysis on any business idea from:
- **Strategist**: Market analysis and go-to-market strategy
- **Innovator**: Creative angles and technical possibilities
- **Critic**: Risk assessment and potential challenges
- **Architect**: Technical architecture and implementation
- **Alchemist**: Synergy opportunities and partnerships
### 🌪️ **Chaos Injection**
Generate 50+ creative variations from any idea with:
- **Rarity System**: Common, Rare, Epic, Legendary ideas
- **Mutation Types**: Reversal, amplification, domain shift, constraints
- **Intensity Control**: Adjust chaos level from 0.1 to 1.0
### 🌌 **Quantum Orbit**
Watch ideas evolve and spawn new concepts:
- **Entropy-driven spawning**: Ideas gain energy and create variations
- **Cross-pollination**: Ideas combine to form new concepts
- **Time decay**: Old ideas fade, new ones emerge
### 💰 **Token Economics**
Experience the monetization system:
- **Token Types**: Chaos, Council, Orbit, Premium tokens
- **Pricing Tiers**: Starter ($9.99), Creator ($29.99), Wizard ($99.99)
- **Real-time Balance**: See your token consumption
## 🚀 Get the Complete System
### **Pricing Packages Available:**
#### 🚀 **Starter Package - $197**
- Basic functionality and demo
- Testing suite for Windows/Linux
- Perfect for beginners exploring the system
#### 🎯 **Creator Package - $594**
- Everything in Starter + Stripe integration
- Analytics and user tracking
- Perfect for users ready to monetize
#### 🧙♂️ **Wizard Package - $694**
- Complete system with all features
- TAAFT submission package included
- Perfect for power users wanting everything
#### 📋 **Original Complete Package - $197**
- Original complete package with all features
**Get your copy:** [https://powercoreai.gumroad.com/l/tjfnd](https://powercoreai.gumroad.com/l/tjfnd)
## 🔧 Technical Stack
### **Backend**
- **Python/FastAPI**: High-performance API backend
- **SQLite/PostgreSQL**: Flexible database options
- **Stripe Integration**: Complete payment processing
- **Analytics Engine**: Usage tracking and insights
### **Frontend**
- **React + TypeScript**: Modern, type-safe UI
- **TailwindCSS**: Beautiful, responsive design
- **Framer Motion**: Smooth animations and interactions
- **Real-time Updates**: Live token balances and analytics
### **AI Integration**
- **Multi-Agent System**: 5 specialized AI council members
- **OpenAI GPT-4/Claude**: Ready for integration
- **Structured Chaos**: Controlled randomness for creativity
- **Token Economics**: Monetizable AI interactions
### **Deployment**
- **Docker**: Containerized deployment
- **Cloud Ready**: AWS, GCP, Azure compatible
- **CI/CD**: Automated testing and deployment
- **Monitoring**: Health checks and error tracking
## 🎯 Business Impact
### **Value Proposition**
- **Replace $300/hour consultants** with instant AI analysis
- **65,000% ROI** for users vs traditional consulting
- **$0.20 per session** vs $300/hour fees
- **Instant validation** vs weeks of research
### **Target Markets**
- **Startup Founders**: Rapid business validation
- **Product Managers**: Feature ideation and prioritization
- **Content Creators**: Creative content generation
- **Consultants**: Faster client deliverables
- **R&D Teams**: Innovation acceleration
### **Market Opportunity**
- **$2.4B productivity software market** growing 15% annually
- **No direct competitors** with multi-agent AI council system
- **Proven monetization model** with tiered pricing
- **Scalable architecture** for enterprise adoption
## 🚀 Try It Now
1. **Enter your business idea** in the text box
2. **Choose an action**: Council Analysis, Chaos Injection, or Tick Orbit
3. **Adjust intensity** with the slider
4. **Click "Run Quantum Wizard"** to see the magic happen!
5. **Get the complete system** to unlock unlimited access
**Ready to accelerate your ideation process? Try the demo above and get the complete system at [https://powercoreai.gumroad.com/l/tjfnd](https://powercoreai.gumroad.com/l/tjfnd)!**
---
*Built with ❤️ by PowerCore - Transforming ideas into execution*
|
RolexAlexander/llama3.2_creole_finetune_gguf
|
RolexAlexander
| 2025-08-06T15:18:11Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T19:55:48Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RolexAlexander
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
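No usage snippet is provided; here is a hedged sketch using llama-cpp-python. The filename glob is a placeholder, not a file named by the card — substitute the actual GGUF file in the repo:
```python
# Hedged sketch: load a GGUF quant from this repo via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RolexAlexander/llama3.2_creole_finetune_gguf",
    filename="*.gguf",  # placeholder pattern; choose the concrete file
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell me a short story in Creole."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```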
|
Butanium/simple-stories-3L8H128D-attention-only-toy-transformer
|
Butanium
| 2025-08-06T15:17:25Z | 6 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-06T15:17:21Z |
# 3-Layer 8-Head Attention-Only Transformer
This is a simplified transformer model with 3 attention layers and 8 attention heads (hidden size 128), designed for studying attention mechanisms in isolation.
## Architecture Differences from Vanilla Transformer
**Removed Components:**
- **No MLP/Feed-Forward layers** - Only attention layers
- **No Layer Normalization** - No LayerNorm before/after attention
- **No positional encoding** - No position embeddings of any kind
**Kept Components:**
- Token embeddings
- Multi-head self-attention with causal masking
- Residual connections around attention layers
- Language modeling head (linear projection to vocabulary)
This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).
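For intuition, a block of this kind might look like the following sketch. This is hypothetical: the repo's actual `AttentionLayer` may differ in details, and only the component list above is confirmed by the card.
```python
import torch
import torch.nn as nn

class AttentionLayer(nn.Module):
    """Causal multi-head self-attention with a residual connection only --
    no LayerNorm, no MLP, no positional encoding (a sketch, not the repo code)."""

    def __init__(self, config):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            config.hidden_size, config.num_attention_heads, batch_first=True
        )

    def forward(self, x):
        # Boolean mask: True above the diagonal blocks attention to future tokens
        T = x.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        out, _ = self.attn(x, x, x, attn_mask=causal)
        return x + out  # residual connection around attention
```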
## Usage
```python
import torch.nn as nn
from transformers import LlamaConfig, PreTrainedModel

# Reconstructed skeleton of the model class (the PreTrainedModel base is assumed)
class AttentionOnlyTransformer(PreTrainedModel):
    config_class = LlamaConfig

    def __init__(self, config: LlamaConfig):
        super().__init__(config)
        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
        # One attention-only block per layer (see the AttentionLayer sketch above)
        self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-3L8H128D-attention-only-toy-transformer')
```
## Training Data
The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
|
eceunal/insectra-fine-tuned
|
eceunal
| 2025-08-06T15:17:01Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-06T15:10:30Z |
---
base_model: unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** eceunal
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
alexchen4ai/gpt-oss-20b-bf16
|
alexchen4ai
| 2025-08-06T15:16:13Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T15:13:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RolexAlexander/GrannyGPT-3.2-Carib
|
RolexAlexander
| 2025-08-06T15:15:36Z | 32 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T14:27:26Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RolexAlexander
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bgunlp/qwen3-8b-sft-cot-qd-suff-ordered-16bit-3ep
|
bgunlp
| 2025-08-06T15:13:13Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T15:09:19Z |
---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** bgunlp
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
amos1088/phi3-sft
|
amos1088
| 2025-08-06T15:10:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T12:53:44Z |
---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
model_name: phi3-sft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for phi3-sft
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="amos1088/phi3-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
h-grieve/blockassist-bc-bellowing_pouncing_horse_1754492757
|
h-grieve
| 2025-08-06T15:06:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing pouncing horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-06T15:06:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing pouncing horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hasnineiftekar9/SO101_test0
|
hasnineiftekar9
| 2025-08-06T15:01:17Z | 7 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:mahmud8248/record-test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T15:01:05Z |
---
datasets: mahmud8248/record-test
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
modaopro/task-13-Qwen-Qwen2.5-1.5B
|
modaopro
| 2025-08-06T15:00:07Z | 52 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"region:us"
] | null | 2025-08-05T00:17:55Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
xylqn7/openai-qwen2.5-7-code
|
xylqn7
| 2025-08-06T14:56:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T14:46:56Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
library_name: transformers
model_name: openai-qwen2.5-7-code
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for openai-qwen2.5-7-code
This model is a fine-tuned version of [unsloth/Qwen2.5-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xylqn7/openai-qwen2.5-7-code", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/foundary/clarifying-em/runs/pkfnh8nw)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
shuohsuan/act_grasp_1
|
shuohsuan
| 2025-08-06T14:52:53Z | 9 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:shuohsuan/areach",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T14:52:35Z |
---
datasets: shuohsuan/areach
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
eagle0504/gpt-oss-20b-multilingual-reasoner
|
eagle0504
| 2025-08-06T14:52:21Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:eagle0504/gpt-oss-20b-multilingual-reasoner",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T14:34:43Z |
---
base_model: openai/gpt-oss-20b
datasets: eagle0504/gpt-oss-20b-multilingual-reasoner
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [eagle0504/gpt-oss-20b-multilingual-reasoner](https://huggingface.co/datasets/eagle0504/gpt-oss-20b-multilingual-reasoner) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eagle0504/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
c-ho/2025-08-06-bll-ner_xlm-roberta-base-ner-hrl_classweights
|
c-ho
| 2025-08-06T14:51:22Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-06T14:09:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed30
|
ekiprop
| 2025-08-06T14:50:13Z | 53 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-06T14:36:43Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-HEURISTIC-Standard_LoRA-Q_V-seed30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-HEURISTIC-Standard_LoRA-Q_V-seed30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2208
- Accuracy: 0.9392
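No usage example is given; a hedged loading sketch follows. It assumes the PEFT checkpoint also carries the classification head (via `modules_to_save`) and the standard GLUE SST-2 label order, neither of which is stated in the card:
```python
# Hedged sketch: wrap roberta-base with the LoRA adapter for SST-2 sentiment.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed30")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("A moving and heartfelt film.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print("positive" if pred == 1 else "negative")  # GLUE convention: 1 = positive (assumption)
```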
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.401 | 0.0950 | 200 | 0.2264 | 0.9163 |
| 0.2908 | 0.1900 | 400 | 0.2063 | 0.9220 |
| 0.2711 | 0.2850 | 600 | 0.2069 | 0.9197 |
| 0.2495 | 0.3800 | 800 | 0.2034 | 0.9358 |
| 0.2435 | 0.4751 | 1000 | 0.2431 | 0.9174 |
| 0.2388 | 0.5701 | 1200 | 0.2091 | 0.9243 |
| 0.2342 | 0.6651 | 1400 | 0.1932 | 0.9266 |
| 0.2286 | 0.7601 | 1600 | 0.2066 | 0.9335 |
| 0.2266 | 0.8551 | 1800 | 0.2041 | 0.9289 |
| 0.2107 | 0.9501 | 2000 | 0.2129 | 0.9323 |
| 0.2245 | 1.0451 | 2200 | 0.1860 | 0.9381 |
| 0.1998 | 1.1401 | 2400 | 0.1892 | 0.9358 |
| 0.2038 | 1.2352 | 2600 | 0.2101 | 0.9289 |
| 0.1947 | 1.3302 | 2800 | 0.2228 | 0.9300 |
| 0.1935 | 1.4252 | 3000 | 0.2030 | 0.9358 |
| 0.1886 | 1.5202 | 3200 | 0.2142 | 0.9312 |
| 0.1975 | 1.6152 | 3400 | 0.1973 | 0.9312 |
| 0.1823 | 1.7102 | 3600 | 0.2401 | 0.9300 |
| 0.1883 | 1.8052 | 3800 | 0.2282 | 0.9335 |
| 0.2007 | 1.9002 | 4000 | 0.2003 | 0.9358 |
| 0.1858 | 1.9952 | 4200 | 0.2312 | 0.9323 |
| 0.179 | 2.0903 | 4400 | 0.2086 | 0.9312 |
| 0.175 | 2.1853 | 4600 | 0.2235 | 0.9289 |
| 0.1751 | 2.2803 | 4800 | 0.2277 | 0.9346 |
| 0.1707 | 2.3753 | 5000 | 0.2167 | 0.9346 |
| 0.1704 | 2.4703 | 5200 | 0.2295 | 0.9381 |
| 0.1726 | 2.5653 | 5400 | 0.2222 | 0.9300 |
| 0.1826 | 2.6603 | 5600 | 0.2038 | 0.9369 |
| 0.1684 | 2.7553 | 5800 | 0.2021 | 0.9323 |
| 0.1589 | 2.8504 | 6000 | 0.2104 | 0.9346 |
| 0.1729 | 2.9454 | 6200 | 0.1957 | 0.9335 |
| 0.1582 | 3.0404 | 6400 | 0.2122 | 0.9369 |
| 0.1501 | 3.1354 | 6600 | 0.2240 | 0.9369 |
| 0.1586 | 3.2304 | 6800 | 0.2060 | 0.9369 |
| 0.1606 | 3.3254 | 7000 | 0.2015 | 0.9346 |
| 0.155 | 3.4204 | 7200 | 0.2069 | 0.9369 |
| 0.1536 | 3.5154 | 7400 | 0.2261 | 0.9369 |
| 0.1569 | 3.6105 | 7600 | 0.2091 | 0.9358 |
| 0.165 | 3.7055 | 7800 | 0.2045 | 0.9369 |
| 0.1518 | 3.8005 | 8000 | 0.2134 | 0.9369 |
| 0.1592 | 3.8955 | 8200 | 0.2142 | 0.9369 |
| 0.1554 | 3.9905 | 8400 | 0.2262 | 0.9381 |
| 0.147 | 4.0855 | 8600 | 0.2250 | 0.9358 |
| 0.1477 | 4.1805 | 8800 | 0.2247 | 0.9381 |
| 0.1453 | 4.2755 | 9000 | 0.2177 | 0.9346 |
| 0.1433 | 4.3705 | 9200 | 0.2180 | 0.9369 |
| 0.1414 | 4.4656 | 9400 | 0.2242 | 0.9381 |
| 0.1391 | 4.5606 | 9600 | 0.2270 | 0.9381 |
| 0.1457 | 4.6556 | 9800 | 0.2175 | 0.9369 |
| 0.137 | 4.7506 | 10000 | 0.2208 | 0.9392 |
| 0.1564 | 4.8456 | 10200 | 0.2165 | 0.9346 |
| 0.1535 | 4.9406 | 10400 | 0.2172 | 0.9358 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
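### Usage sketch (illustrative, not from the original card)
A minimal, hedged loading sketch: it assumes this repo hosts a standard PEFT LoRA adapter for `roberta-base` with a two-label SST-2 classification head, and that the label order is the usual 0 = negative, 1 = positive. Both assumptions come from the tags and the repo name, not from the card itself.
```python
# Hedged sketch: assumes a LoRA adapter for roberta-base (SST-2, 2 labels).
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed30")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("a gorgeous, witty, seductive movie.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # assumed label order: 0 = negative, 1 = positive
```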
|
nithishreddy2002/gemma-2-2b-ats-analyzer-merged
|
nithishreddy2002
| 2025-08-06T14:48:37Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T14:46:24Z |
---
base_model: unsloth/gemma-2-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nithishreddy2002
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
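An illustrative loading sketch (not from the original card): it assumes the repo hosts merged full weights loadable directly with `transformers`, and the ATS-analysis prompt is only a guess from the repo name.
```python
# Hedged sketch: assumes merged full weights loadable directly with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nithishreddy2002/gemma-2-2b-ats-analyzer-merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "Review the following resume for ATS compatibility:\n..."  # hypothetical task prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```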
|
YangZexi/flan-t5-xl-stance-lora
|
YangZexi
| 2025-08-06T14:47:28Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/flan-t5-xl",
"lora",
"transformers",
"arxiv:1910.09700",
"base_model:google/flan-t5-xl",
"region:us"
] | null | 2025-08-06T14:46:17Z |
---
base_model: google/flan-t5-xl
library_name: peft
tags:
- base_model:adapter:google/flan-t5-xl
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
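Pending the authors' own snippet, here is a hedged sketch. It assumes (from the repo name and tags only) a LoRA adapter on `google/flan-t5-xl` for stance classification; the prompt format is a placeholder guess.
```python
# Hedged sketch: assumes a LoRA adapter on google/flan-t5-xl (stance task per the repo name).
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")
model = PeftModel.from_pretrained(base, "YangZexi/flan-t5-xl-stance-lora")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")

# Placeholder prompt; the actual expected input format is not documented in the card.
inputs = tokenizer("What is the stance of this sentence: ...", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```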
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
nithishreddy2002/gemma-2-2b-ats-analyzer
|
nithishreddy2002
| 2025-08-06T14:45:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T14:45:14Z |
---
base_model: unsloth/gemma-2-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nithishreddy2002
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jasminekitty328/flan-t5-intentconan-qlora
|
jasminekitty328
| 2025-08-06T14:44:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T14:44:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lmstudio-community/Qwen3-4B-Instruct-2507-MLX-8bit
|
lmstudio-community
| 2025-08-06T14:40:13Z | 316 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mlx",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-06T14:39:32Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Instruct-2507
---
## 💫 Community Model> Qwen3-4B-Instruct-2507 by Qwen
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507)<br>
**MLX quantization**: provided by [LM Studio team](https://x.com/lmstudio) using [mlx_lm](https://github.com/ml-explore/mlx-lm)<br>
## Technical Details
8-bit quantized version of Qwen3-4B-Instruct-2507 using MLX, optimized for Apple Silicon.
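A short hedged sketch of loading this quantization with `mlx_lm` (Apple Silicon only; assumes a recent `mlx-lm` release with the `load`/`generate` helpers):
```python
# Hedged sketch: run the 8-bit MLX quantization on Apple Silicon via mlx_lm.
from mlx_lm import load, generate

model, tokenizer = load("lmstudio-community/Qwen3-4B-Instruct-2507-MLX-8bit")
print(generate(model, tokenizer, prompt="Explain KV caching briefly.", max_tokens=128))
```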
## Special thanks
🙏 Special thanks to the [Apple Machine Learning Research](https://github.com/ml-explore) team for creating [MLX](https://github.com/ml-explore/mlx).
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
lmstudio-community/Qwen3-4B-Instruct-2507-MLX-6bit
|
lmstudio-community
| 2025-08-06T14:39:04Z | 63 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mlx",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-06T14:38:30Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Instruct-2507
---
## 💫 Community Model> Qwen3-4B-Instruct-2507 by Qwen
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507)<br>
**MLX quantization**: provided by [LM Studio team](https://x.com/lmstudio) using [mlx_lm](https://github.com/ml-explore/mlx-lm)<br>
## Technical Details
6-bit quantized version of Qwen3-4B-Instruct-2507 using MLX, optimized for Apple Silicon.
## Special thanks
🙏 Special thanks to the [Apple Machine Learning Research](https://github.com/ml-explore) team for creating [MLX](https://github.com/ml-explore/mlx).
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
suusuu93/dialo-finetuned1
|
suusuu93
| 2025-08-06T14:38:06Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T14:37:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
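The card is otherwise empty; a hedged sketch under the assumption (taken from the repo tags only) that this is a GPT-2-family conversational text-generation checkpoint:
```python
# Hedged sketch: assumes a GPT-2-style text-generation checkpoint (per the repo tags).
from transformers import pipeline

chat = pipeline("text-generation", model="suusuu93/dialo-finetuned1")
print(chat("Hello, how are you?", max_new_tokens=40)[0]["generated_text"])
```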
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kaixuanliu/vit-base-patch16-224-in21k-finetuned-lora-food101
|
Kaixuanliu
| 2025-08-06T14:34:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T14:25:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
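In the absence of an official snippet, a hedged sketch: it assumes (from the repo name alone) a LoRA adapter on `google/vit-base-patch16-224-in21k` fine-tuned for Food-101, hence 101 labels; the image path is a placeholder.
```python
# Hedged sketch: assumes a LoRA adapter on ViT-base (in21k) for Food-101 (101 classes).
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

base = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=101
)
model = PeftModel.from_pretrained(
    base, "Kaixuanliu/vit-base-patch16-224-in21k-finetuned-lora-food101"
)
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

image = Image.open("dish.jpg")  # placeholder image path
logits = model(**processor(image, return_tensors="pt")).logits
print(logits.argmax(-1).item())  # predicted Food-101 class index (assumed label set)
```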
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekiprop/SST-2-GLoRA-p50-seed30
|
ekiprop
| 2025-08-06T14:34:24Z | 58 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-06T14:19:27Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-GLoRA-p50-seed30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-GLoRA-p50-seed30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2047
- Accuracy: 0.9518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.3708 | 0.0950 | 200 | 0.2292 | 0.9243 |
| 0.2836 | 0.1900 | 400 | 0.2140 | 0.9209 |
| 0.2606 | 0.2850 | 600 | 0.1932 | 0.9266 |
| 0.2345 | 0.3800 | 800 | 0.2018 | 0.9346 |
| 0.2316 | 0.4751 | 1000 | 0.2368 | 0.9197 |
| 0.2243 | 0.5701 | 1200 | 0.2000 | 0.9323 |
| 0.2249 | 0.6651 | 1400 | 0.2126 | 0.9243 |
| 0.2055 | 0.7601 | 1600 | 0.1949 | 0.9381 |
| 0.2182 | 0.8551 | 1800 | 0.1720 | 0.9427 |
| 0.1972 | 0.9501 | 2000 | 0.1763 | 0.9484 |
| 0.2069 | 1.0451 | 2200 | 0.1789 | 0.9438 |
| 0.17 | 1.1401 | 2400 | 0.1914 | 0.9415 |
| 0.1792 | 1.2352 | 2600 | 0.1861 | 0.9472 |
| 0.1805 | 1.3302 | 2800 | 0.2099 | 0.9312 |
| 0.1723 | 1.4252 | 3000 | 0.1966 | 0.9369 |
| 0.1689 | 1.5202 | 3200 | 0.1750 | 0.9484 |
| 0.1646 | 1.6152 | 3400 | 0.1658 | 0.9484 |
| 0.1676 | 1.7102 | 3600 | 0.2016 | 0.9381 |
| 0.1672 | 1.8052 | 3800 | 0.1718 | 0.9495 |
| 0.1741 | 1.9002 | 4000 | 0.1613 | 0.9495 |
| 0.1627 | 1.9952 | 4200 | 0.2029 | 0.9484 |
| 0.1497 | 2.0903 | 4400 | 0.1963 | 0.9392 |
| 0.1399 | 2.1853 | 4600 | 0.1978 | 0.9484 |
| 0.1491 | 2.2803 | 4800 | 0.2054 | 0.9472 |
| 0.1385 | 2.3753 | 5000 | 0.1959 | 0.9472 |
| 0.1447 | 2.4703 | 5200 | 0.2559 | 0.9335 |
| 0.1427 | 2.5653 | 5400 | 0.1981 | 0.9427 |
| 0.1609 | 2.6603 | 5600 | 0.1697 | 0.9484 |
| 0.138 | 2.7553 | 5800 | 0.2065 | 0.9381 |
| 0.1396 | 2.8504 | 6000 | 0.1950 | 0.9461 |
| 0.1322 | 2.9454 | 6200 | 0.1843 | 0.9427 |
| 0.1361 | 3.0404 | 6400 | 0.2207 | 0.9381 |
| 0.1133 | 3.1354 | 6600 | 0.2011 | 0.9392 |
| 0.1174 | 3.2304 | 6800 | 0.1895 | 0.9461 |
| 0.1304 | 3.3254 | 7000 | 0.1863 | 0.9484 |
| 0.1139 | 3.4204 | 7200 | 0.1987 | 0.9484 |
| 0.1243 | 3.5154 | 7400 | 0.2047 | 0.9518 |
| 0.1196 | 3.6105 | 7600 | 0.1947 | 0.9438 |
| 0.1225 | 3.7055 | 7800 | 0.1881 | 0.9495 |
| 0.1237 | 3.8005 | 8000 | 0.1898 | 0.9495 |
| 0.1259 | 3.8955 | 8200 | 0.1992 | 0.9415 |
| 0.117 | 3.9905 | 8400 | 0.2065 | 0.9415 |
| 0.111 | 4.0855 | 8600 | 0.2073 | 0.9438 |
| 0.1026 | 4.1805 | 8800 | 0.2496 | 0.9461 |
| 0.1048 | 4.2755 | 9000 | 0.2433 | 0.9450 |
| 0.1029 | 4.3705 | 9200 | 0.2255 | 0.9450 |
| 0.1085 | 4.4656 | 9400 | 0.2170 | 0.9450 |
| 0.1024 | 4.5606 | 9600 | 0.2116 | 0.9484 |
| 0.1086 | 4.6556 | 9800 | 0.2068 | 0.9495 |
| 0.1045 | 4.7506 | 10000 | 0.1989 | 0.9484 |
| 0.1098 | 4.8456 | 10200 | 0.2011 | 0.9484 |
| 0.1057 | 4.9406 | 10400 | 0.2013 | 0.9507 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
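### Usage sketch (illustrative, not from the original card)
A hedged sketch, assuming (per the `lora` tag) a standard LoRA-compatible adapter on `roberta-base` with two labels: PEFT's `merge_and_unload` folds the adapter into the base weights for adapter-free inference. The output directory name is hypothetical.
```python
# Hedged sketch: merge this LoRA adapter into roberta-base for standalone inference.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
peft_model = PeftModel.from_pretrained(base, "ekiprop/SST-2-GLoRA-p50-seed30")
merged = peft_model.merge_and_unload()  # plain transformers model, no PEFT wrapper
merged.save_pretrained("sst2-glora-merged")  # hypothetical output directory
```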
|