Improve model card: Add pipeline tag, library name, abstract, and usage example
#1
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,36 +1,108 @@

---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
license: apache-2.0
tags:
- reasoning
- agent
- program
- code
pipeline_tag: text-generation
library_name: transformers
---

# CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis

Inductive program synthesis, or programming by example, requires synthesizing functions from input-output examples that generalize to unseen inputs. While large language model agents have shown promise in programming tasks guided by natural language, their ability to perform inductive program synthesis is underexplored. This work proposes CodeARC, the Code Abstraction and Reasoning Challenge, a new evaluation framework where agents interact with a hidden target function by querying it with new inputs, synthesizing candidate functions, and iteratively refining their solutions using a differential testing oracle. This interactive setting encourages agents to perform function calls and self-correction based on feedback, providing a more realistic and challenging testbed for evaluating LLM-based program synthesis and inductive reasoning. The model in this repository is a fine-tuned LLaMA-3.1-8B-Instruct model optimized for these tasks.
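
The interaction protocol described above can be pictured, very roughly, as the loop below. This is only an illustrative sketch, not the actual CodeARC harness: `agent.propose`, `hidden_fn`, and `probe_inputs` are placeholder names, and the real agent and oracle interfaces live in the GitHub repository linked below.

```python
def differential_test(candidate, hidden_fn, probe_inputs):
    """Return an input where the candidate and the hidden function disagree, or None."""
    for x in probe_inputs:
        if candidate(x) != hidden_fn(x):
            return x
    return None

def synthesize(agent, hidden_fn, probe_inputs, max_rounds=5):
    # Start from a few observed input-output examples.
    observations = [(x, hidden_fn(x)) for x in probe_inputs[:3]]
    candidate = None
    for _ in range(max_rounds):
        candidate = agent.propose(observations)  # agent synthesizes a candidate function
        counterexample = differential_test(candidate, hidden_fn, probe_inputs)
        if counterexample is None:
            return candidate  # the oracle found no disagreement
        # Self-correct: feed the failing input-output pair back to the agent.
        observations.append((counterexample, hidden_fn(counterexample)))
    return candidate
```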

## Paper
[CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis](https://huggingface.co/papers/2503.23145)

## Code
[https://github.com/Anjiang-Wei/CodeARC](https://github.com/Anjiang-Wei/CodeARC)

## Website
[https://anjiang-wei.github.io/CodeARC-Website/](https://anjiang-wei.github.io/CodeARC-Website/)

## Datasets
* **Problems Dataset**: [anjiangwei/CodeARC-Problems](https://huggingface.co/datasets/anjiangwei/CodeARC-Problems)
* **10 Input-Output examples for each problem**: [anjiangwei/CodeARC-Invocations](https://huggingface.co/datasets/anjiangwei/CodeARC-Invocations)
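
Both datasets listed above are public repositories on the Hugging Face Hub and can be pulled with the `datasets` library. The snippet below is a minimal sketch: the splits and field names are not spelled out here, so inspect the loaded objects or the dataset cards for the actual schema.

```python
from datasets import load_dataset

# Load the benchmark problems and the accompanying input-output invocations.
problems = load_dataset("anjiangwei/CodeARC-Problems")
invocations = load_dataset("anjiangwei/CodeARC-Invocations")

# Printing the DatasetDict objects shows the available splits and columns.
print(problems)
print(invocations)
```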

## Fine-tuned models
* [https://huggingface.co/LLM4Code/CodeARC_annotated_llama3.1](https://huggingface.co/LLM4Code/CodeARC_annotated_llama3.1)
* [https://huggingface.co/LLM4Code/CodeARC_anonymous_llama3.1](https://huggingface.co/LLM4Code/CodeARC_anonymous_llama3.1)

## Usage

You can use this fine-tuned model with the `transformers` library for text generation tasks.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the model ID of this repository, e.g. LLM4Code/CodeARC_annotated_llama3.1
# or LLM4Code/CodeARC_anonymous_llama3.1.
model_name = "LLM4Code/CodeARC_annotated_llama3.1"  # adjust if using the other checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # or torch.float16 if bfloat16 is not supported
    device_map="auto",
)
model.eval()

# Example prompt for inductive program synthesis:
# ask for a Python function given input-output pairs.
prompt = """<|begin_of_text|><|start_header_id|>user<|end_header_id|>

Synthesize a Python function `sum_list` that takes a list of integers and returns their sum.

Input: [1, 2, 3]
Output: 6

Input: [5, 0, -5]
Output: 0<|eot_id|><|start_header_id|>assistant<|end_header_id|>

def sum_list(numbers):
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate the completion
generation_output = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    top_p=0.9,
    temperature=0.6,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eom_id|>")],
)
generated_text = tokenizer.decode(generation_output[0], skip_special_tokens=True)
print(generated_text)
```
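
Since the base model is Llama-3.1-8B-Instruct, you can also let the tokenizer build the special-token scaffolding instead of writing it by hand. The sketch below reuses `model` and `tokenizer` from the snippet above and assumes the fine-tuned checkpoint keeps the standard Llama 3.1 chat template.

```python
# Alternative prompt construction via the chat template
# (assumes the checkpoint ships the standard Llama 3.1 chat template).
messages = [
    {"role": "user", "content": (
        "Synthesize a Python function `sum_list` that takes a list of integers "
        "and returns their sum.\n\nInput: [1, 2, 3]\nOutput: 6\n\nInput: [5, 0, -5]\nOutput: 0"
    )},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model starts answering
    return_tensors="pt",
).to(model.device)

generation_output = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    top_p=0.9,
    temperature=0.6,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(generation_output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```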

For more detailed usage, evaluation scripts, and setting up the full CodeARC environment, please refer to the [official GitHub repository](https://github.com/Anjiang-Wei/CodeARC).

## Citation

If you use this model or the CodeARC framework in your research, please cite the corresponding paper:

```bibtex
@article{wei2025codearc,
  title={CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis},
  author={Wei, Anjiang and Suresh, Tarun and Cao, Jiannan and Kannan, Naveen and Wu, Yuheng and Yan, Kai and Teixeira, Thiago SFX and Wang, Ke and Aiken, Alex},
  journal={arXiv preprint arXiv:2503.23145},
  year={2025}
}
```

## License

This project is licensed under the Apache 2.0 License. See the [LICENSE](https://opensource.org/licenses/Apache-2.0) file for details.