|
--- |
|
library_name: transformers |
|
license: llama3.2 |
|
language: |
|
- en |
|
metrics: |
|
- f1 |
|
- precision |
|
- recall |
|
base_model: |
|
- meta-llama/Llama-3.2-1B |
|
pipeline_tag: text-classification |
|
--- |
|
|
|
# Llama 3.2 1B for classifying code comments (multi-label) |
|
|
|
This model performs multi-label classification of comments in Python code, assigning each comment one or more of the labels 'usage', 'parameters', 'developmentNotes', 'expand', and 'summary'. |
|
|
|
- **Developed by:** Fabian C. Peña, Steffen Herbold |
|
- **Fine-tuned from:** [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) |
|
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark) |
|
- **Language:** English |
|
- **License:** Llama 3.2 Community License Agreement |
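
## How to use

The model can be loaded with the standard `transformers` sequence-classification API. The snippet below is a minimal sketch, not the authors' reference code: the repository id is a placeholder, and since this is a multi-label classifier it assumes a per-class sigmoid over the logits with a 0.5 threshold; adjust both to your setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repository id -- replace with the actual id of this model on the Hub.
model_id = "aieng-lab/Llama-3.2-1B-comment-classification"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

comment = "# Retries the request up to three times before raising."
inputs = tokenizer(comment, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: apply a sigmoid per class and keep every label above a
# threshold (0.5 is an assumption, not a value taken from the paper).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p.item() > 0.5]
print(predicted)  # a subset of the five labels listed above
```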
|
|
|
## Citation |
|
|
|
```bibtex |
|
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
  title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
  year = {2025}
}
|
``` |
|
|