nielsr (HF Staff) committed (verified)
Commit: f71df3b · Parent(s): a46d2d2

Improve model card with metadata, abstract, and sample usage


This PR significantly improves the CodeARC model card by:

- Adding the `pipeline_tag: text-generation` to the metadata, which ensures the model can be easily discovered on the Hugging Face Hub (e.g., at https://huggingface.co/models?pipeline_tag=text-generation).
- Specifying `library_name: transformers` in the metadata, making it clear which library the model is compatible with and enabling direct loading with the `transformers` library (a loading sketch is included below).
- Including the paper's abstract in the model card content for a concise overview of the research.
- Adding a practical "Sample Usage" section, extracted from the GitHub repository's README, to guide users on how to quickly set up and run the model.

These changes make the model card more discoverable, usable, and informative.
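As a quick illustration of the `library_name` / `pipeline_tag` additions, the checkpoint can be loaded directly with the `transformers` API. This is a minimal sketch rather than part of the commit itself; it assumes the repo id `LLM4Code/CodeARC_annotated_llama3.1` (one of the fine-tuned checkpoints listed in the card), so substitute the id of the model this card actually describes.

```python
# Minimal sketch of loading the checkpoint with the transformers library,
# as enabled by the new `library_name` metadata. The repo id below is one of
# the fine-tuned models listed in the card and is assumed for illustration;
# substitute the id of the checkpoint this card belongs to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM4Code/CodeARC_annotated_llama3.1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The base model is Llama-3.1-8B-Instruct, so its chat template applies.
messages = [{"role": "user", "content": "Write a Python function f such that f(2) == 4 and f(3) == 9."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```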

Files changed (1)
  1. README.md +58 -11
README.md CHANGED
````diff
@@ -1,36 +1,83 @@
 ---
-license: apache-2.0
 base_model:
 - meta-llama/Llama-3.1-8B-Instruct
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 tags:
 - reasoning
 - agent
 - program
 - code
 ---
+
 **CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis**

+This model was presented in the paper [CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis](https://huggingface.co/papers/2503.23145).

-Paper: https://arxiv.org/pdf/2503.23145
+### Abstract

-Code: https://github.com/Anjiang-Wei/CodeARC
+Inductive program synthesis, or programming by example, requires synthesizing functions from input-output examples that generalize to unseen inputs. While large language model agents have shown promise in programming tasks guided by natural language, their ability to perform inductive program synthesis is underexplored. Existing evaluation protocols rely on static sets of examples and held-out tests, offering no feedback when synthesized functions are incorrect and failing to reflect real-world scenarios such as reverse engineering. We propose CodeARC, the Code Abstraction and Reasoning Challenge, a new evaluation framework where agents interact with a hidden target function by querying it with new inputs, synthesizing candidate functions, and iteratively refining their solutions using a differential testing oracle. This interactive setting encourages agents to perform function calls and self-correction based on feedback. We construct the first large-scale benchmark for general-purpose inductive program synthesis, featuring 1114 functions. Among 18 models evaluated, o3-mini performs best with a success rate of 52.7%, highlighting the difficulty of this task. Fine-tuning LLaMA-3.1-8B-Instruct on curated synthesis traces yields up to a 31% relative performance gain. CodeARC provides a more realistic and challenging testbed for evaluating LLM-based program synthesis and inductive reasoning. Our code, data, and models are publicly available at this https URL

-Website: https://anjiang-wei.github.io/CodeARC-Website/
+### Project Links

-Dataset: https://huggingface.co/datasets/anjiangwei/CodeARC-Problems
+* **Paper**: https://arxiv.org/pdf/2503.23145
+* **Code**: https://github.com/Anjiang-Wei/CodeARC
+* **Website**: https://anjiang-wei.github.io/CodeARC-Website/
+* **Problems Dataset**: https://huggingface.co/datasets/anjiangwei/CodeARC-Problems
+* **Invocations Dataset (10 Input-Output examples)**: https://huggingface.co/datasets/anjiangwei/CodeARC-Invocations
+* **Fine-tuned models**:
+  * https://huggingface.co/LLM4Code/CodeARC_annotated_llama3.1
+  * https://huggingface.co/LLM4Code/CodeARC_anonymous_llama3.1

-10 Input-Output examples for each problem: https://huggingface.co/datasets/anjiangwei/CodeARC-Invocations
+### Sample Usage

-Fine-tuned models:
-https://huggingface.co/LLM4Code/CodeARC_annotated_llama3.1
+To get started with the model and run evaluation:

-https://huggingface.co/LLM4Code/CodeARC_anonymous_llama3.1
+1. **Setting Up the Environment**:
+Create and activate a Conda environment and install dependencies:

-```
+```bash
+conda create -y -n CodeARC python=3.10.12
+conda activate CodeARC
+pip install -r requirements.txt
+```
+
+2. **Set API keys**:
+Ensure you have valid API keys for the required services:
+
+```bash
+export OPENAI_API_KEY=<your_openai_api_key>
+export ANTHROPIC_API_KEY=<your_anthropic_api_key>
+export TOGETHER_API_KEY=<your_together_api_key>
+```
+
+3. **Running Main Evaluation**:
+You can run an evaluation with a supported model. For testing purposes, you can limit the evaluation to fewer problems using `--total_idx`.
+
+```python
+python3 run.py --model_name openai/gpt-4o-mini --total_idx 20
+```
+(Supported models include OpenAI models (e.g., `openai/gpt-4o`), Anthropic models (e.g., `anthropic/claude-3-7-sonnet-20250219`), and models served by Together AI (e.g., `meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo`).)
+
+To summarize results:
+```python
+python3 src/compute_metrics.py
+```
+
+### Citation
+
+If you use this model or repository in your research, please cite the corresponding paper:
+
+```bibtex
 @article{wei2025codearc,
   title={CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis},
   author={Wei, Anjiang and Suresh, Tarun and Cao, Jiannan and Kannan, Naveen and Wu, Yuheng and Yan, Kai and Teixeira, Thiago SFX and Wang, Ke and Aiken, Alex},
   journal={arXiv preprint arXiv:2503.23145},
   year={2025}
 }
-```
+```
+
+### License
+
+This project is licensed under the [Apache 2.0 License](https://opensource.org/licenses/Apache-2.0).
````
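The abstract added above describes an interactive loop in which agents refine candidate functions using a differential testing oracle that compares them against the hidden target on queried inputs. The snippet below is only a conceptual sketch of that idea, with hypothetical function names; it is not the evaluation harness implemented in the CodeARC repository.

```python
# Conceptual sketch of a differential testing oracle as described in the
# abstract: compare a candidate function against the hidden target on a set
# of inputs and report the first counterexample. Names are hypothetical;
# this is not the actual CodeARC harness.
from typing import Any, Callable, Iterable, Optional, Tuple


def differential_test(
    candidate: Callable[[Any], Any],
    hidden_target: Callable[[Any], Any],
    inputs: Iterable[Any],
) -> Optional[Tuple[Any, Any, Any]]:
    """Return (input, expected, got) for the first mismatch, or None if all inputs agree."""
    for x in inputs:
        expected = hidden_target(x)
        got = candidate(x)
        if got != expected:
            return (x, expected, got)
    return None


# Example: the agent's candidate misses negative inputs and receives a counterexample.
hidden = lambda x: abs(x) * 2   # hidden target, observable only through queries
candidate = lambda x: x * 2     # agent's current guess
print(differential_test(candidate, hidden, [0, 1, 5, -3]))  # -> (-3, 6, -6)
```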