---
library_name: transformers
license: mit
language:
- en
metrics:
- accuracy
- perplexity
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: fill-mask
---
# ModernBERT base for filling user actions in requirement specifications
This model fills masked tokens ([MASK]) in requirements specifications; a minimal usage sketch is provided below. During fine-tuning, verbs identified via part-of-speech (POS) tagging were used as a proxy for user actions.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
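## Usage

The snippet below is a minimal sketch of querying the model with the `transformers` fill-mask pipeline. The repository ID `aieng-lab/ModernBERT-base-requirements` and the example requirement sentence are placeholders for illustration, not values confirmed by this card.

```python
from transformers import pipeline

# Hypothetical repository ID; replace it with the actual ID of this model card.
model_id = "aieng-lab/ModernBERT-base-requirements"

# Fill-mask pipeline, matching the card's pipeline_tag.
fill_mask = pipeline("fill-mask", model=model_id)

# Example requirement with the user action masked out.
requirement = "The user shall be able to [MASK] a report from the dashboard."

for prediction in fill_mask(requirement, top_k=5):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```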
## Citation

```
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
  title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
  year = {2025}
}
```