---
language:
- en
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: completion
    dtype: string
  splits:
  - name: train
    num_bytes: 3111855.6398965064
    num_examples: 2000
  - name: validation
    num_bytes: 311390.6933457422
    num_examples: 200
  - name: test
    num_bytes: 311850.6638180986
    num_examples: 200
  download_size: 1739330
  dataset_size: 3735096.9970603473
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
|
|
|
This dataset was designed for fine-tuning [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct) with GRPO (Group Relative Policy Optimization).
|
It is intended to teach the model to summarize Reddit posts: each example pairs a `prompt` (a Reddit post to summarize) with a `completion` (its summary).
|
|
|
You can reproduce the training with this [Colab notebook](https://colab.research.google.com/drive/13mRqgRIvMGGgkQfJL4CS0lzcL4Vl9xUN?usp=sharing); training the model takes about 40 minutes.
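GRPO optimizes the policy against a scalar reward computed on each sampled completion. The notebook's actual reward function is not reproduced here; the sketch below is a hypothetical example of the function shape that TRL's `GRPOTrainer` accepts (a callable taking `completions` and returning one score per completion), rewarding summaries close to a target length:

```python
# Hypothetical reward function for illustration only -- the notebook's
# real reward may differ. GRPOTrainer calls it with the sampled
# completions and expects one float score per completion.
def reward_conciseness(completions, **kwargs):
    """Score each completion: the closer to ~200 characters, the better."""
    target = 200
    return [-abs(target - len(c)) / target for c in completions]

scores = reward_conciseness(completions=["too short", "x" * 200])
```

A function like this would be passed to `GRPOTrainer` via its `reward_funcs` argument alongside the model and this dataset's `train` split.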
|
|