---
license: mit
---

This is the evaluation benchmark of the [LLM4Decompile](https://github.com/albertan017/LLM4Decompile) project.

## About

* **Decompile-Bench-Eval** includes manually crafted binaries from the well-established HumanEval and MBPP benchmarks, alongside compiled GitHub repositories released after 2025 to mitigate data-leakage issues.
* **C and C++ support:** These datasets include both **C** and **C++** source code, whereas earlier models (LLM4Decompile-V1.5, V2) and the HumanEval-Decompile dataset were limited to C only.

## Columns

The benchmark contains three splits: humaneval, mbpp, and github2025. We also provide a [JSON version](https://github.com/albertan017/LLM4Decompile/tree/main/decompile-bench/data) of the data. The splits contain the following columns:

```
{
  "index": "index of the function",
  "func_name": "demangled name of the function",
  "func_dep": "function dependencies (includes, helper functions), or the path to the source code",
  "func": "source code",
  "test": "unit tests for the function, empty for GitHub data",
  "opt": "optimization level: O0, O1, O2, or O3",
  "language": "language, c or cpp",
  "asm": "assembly",
  "ida_asm": "assembly from IDA Pro",
  "ida_pseudo": "decompiled results (pseudo code) from IDA Pro",
  "ghidra_asm": "assembly from Ghidra",
  "ghidra_pseudo": "decompiled results (pseudo code) from Ghidra"
}
```

## Others

For more details, please check the [LLM4Decompile](https://github.com/albertan017/LLM4Decompile) project.
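As an illustration of the column schema above, here is a minimal sketch of what one record looks like. All field values are illustrative placeholders, not real benchmark data; in practice you would load the actual splits with the `datasets` library (e.g. `datasets.load_dataset(...)` with this dataset's repo id).

```python
# Sketch of one record, field names taken from the column schema.
# Values are placeholders for illustration only.
record = {
    "index": 0,
    "func_name": "add",
    "func_dep": "",                     # includes / helper functions, or source path
    "func": "int add(int a, int b) { return a + b; }",
    "test": "assert(add(1, 2) == 3);",  # empty for the github2025 split
    "opt": "O0",                        # one of O0, O1, O2, O3
    "language": "c",                    # "c" or "cpp"
    "asm": "",                          # compiler-emitted assembly
    "ida_asm": "",                      # assembly from IDA Pro
    "ida_pseudo": "",                   # pseudo code from IDA Pro
    "ghidra_asm": "",                   # assembly from Ghidra
    "ghidra_pseudo": "",                # pseudo code from Ghidra
}

# Sanity checks mirroring the column descriptions.
assert record["opt"] in {"O0", "O1", "O2", "O3"}
assert record["language"] in {"c", "cpp"}
```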