---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- answerdotai/ModernBERT-large
pipeline_tag: text-classification
---
# ModernBERT large for classifying the sentiment in communications

This model classifies the sentiment in developer interactions (e.g., GitHub, Stack Overflow) as 'positive', 'neutral' or 'negative'.

- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
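## Usage

A minimal usage sketch with the `transformers` text-classification pipeline, assuming the model is loaded directly from the Hugging Face Hub. The model identifier below is a placeholder (replace it with this repository's id), and the label names are assumed to match the classes described above (`positive`, `neutral`, `negative`).

```python
from transformers import pipeline

# Placeholder model id: replace with this repository's actual Hugging Face id.
classifier = pipeline(
    "text-classification",
    model="your-namespace/ModernBERT-large-sentiment",
)

result = classifier("This fix works great, thanks for the quick turnaround!")
print(result)
# Expected shape: [{'label': '<positive|neutral|negative>', 'score': ...}]
```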
## Citation

```
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
  title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
  year = {2025}
}
```