nielsr (HF Staff) committed
Commit 6d3b777 · verified · 1 Parent(s): 3684224

Improve model card: Add pipeline tag, library name, abstract, and usage example


This PR significantly improves the model card for the CodeARC fine-tuned LLaMA-3.1-8B-Instruct model by adding essential metadata and enriching the content:

- **Metadata `pipeline_tag: text-generation`**: This tag enables users to easily discover the model via the Hugging Face Hub's pipeline filters (e.g., [https://huggingface.co/models?pipeline_tag=text-generation](https://huggingface.co/models?pipeline_tag=text-generation)).
- **Metadata `library_name: transformers`**: This ensures the model is recognized as compatible with the Hugging Face Transformers library, activating the "Use in Transformers" widget and providing relevant code snippets directly on the model page (a minimal `pipeline` sketch illustrating this is shown below).
- **Comprehensive Description**: The model card now includes a concise abstract of the paper, providing users with a clear understanding of the model's purpose and the CodeARC framework.
- **Updated Paper Link**: The paper link has been updated to point to the Hugging Face Papers page, integrating the model card more tightly with the Hub's academic resources.
- **Sample Usage**: A practical Python code example is included, demonstrating how to load and use the model for text generation with the `transformers` library, facilitating quick experimentation and integration for users.
- **Explicit License**: An explicit license section is added for clarity.

These changes enhance the model's discoverability, usability, and overall presentation on the Hugging Face Hub.
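To make the effect of the new metadata concrete, the snippet below is a minimal sketch of loading one of the CodeARC fine-tuned checkpoints through the high-level `pipeline` API that the `text-generation` tag advertises. It is illustrative only: the model id `LLM4Code/CodeARC_annotated_llama3.1` is one of the checkpoints listed on the card, the sampling settings mirror those in the card's usage example, and both should be adjusted to your setup; the full usage example committed in the README below remains the reference.

```python
# Minimal sketch: loading a CodeARC fine-tuned checkpoint via the transformers pipeline API.
# The model id below is one of the checkpoints listed on the card; adjust as needed.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",                           # matches the pipeline_tag added by this PR
    model="LLM4Code/CodeARC_annotated_llama3.1",
    torch_dtype=torch.bfloat16,                  # use float16 if bfloat16 is unsupported
    device_map="auto",
)

prompt = (
    "Synthesize a Python function `sum_list` that takes a list of integers "
    "and returns their sum.\n"
    "Input: [1, 2, 3]\nOutput: 6\n"
    "Input: [5, 0, -5]\nOutput: 0\n"
)
outputs = generator(prompt, max_new_tokens=100, do_sample=True, top_p=0.9, temperature=0.6)
print(outputs[0]["generated_text"])
```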

Files changed (1)
  1. README.md +83 -11
README.md CHANGED
@@ -1,36 +1,108 @@

Old version of README.md (lines removed by this commit are prefixed with "-"; unchanged lines are indented):

  ---
- license: apache-2.0
  base_model:
  - meta-llama/Llama-3.1-8B-Instruct
  tags:
  - reasoning
  - agent
  - program
  - code
  ---
- **CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis**

- Paper: https://arxiv.org/pdf/2503.23145

- Code: https://github.com/Anjiang-Wei/CodeARC

- Website: https://anjiang-wei.github.io/CodeARC-Website/

- Dataset: https://huggingface.co/datasets/anjiangwei/CodeARC-Problems

- 10 Input-Output examples for each problem: https://huggingface.co/datasets/anjiangwei/CodeARC-Invocations

- Fine-tuned models:
- https://huggingface.co/LLM4Code/CodeARC_annotated_llama3.1

- https://huggingface.co/LLM4Code/CodeARC_anonymous_llama3.1

  ```
  @article{wei2025codearc,
    title={CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis},
    author={Wei, Anjiang and Suresh, Tarun and Cao, Jiannan and Kannan, Naveen and Wu, Yuheng and Yan, Kai and Teixeira, Thiago SFX and Wang, Ke and Aiken, Alex},
    journal={arXiv preprint arXiv:2503.23145},
    year={2025}
  }
- ```
New version of README.md (lines added by this commit are prefixed with "+"; unchanged lines are indented):

  ---
  base_model:
  - meta-llama/Llama-3.1-8B-Instruct
+ license: apache-2.0
  tags:
  - reasoning
  - agent
  - program
  - code
+ pipeline_tag: text-generation
+ library_name: transformers
  ---

+ # CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis
+
+ Inductive program synthesis, or programming by example, requires synthesizing functions from input-output examples that generalize to unseen inputs. While large language model agents have shown promise in programming tasks guided by natural language, their ability to perform inductive program synthesis is underexplored. This work proposes CodeARC, the Code Abstraction and Reasoning Challenge, a new evaluation framework where agents interact with a hidden target function by querying it with new inputs, synthesizing candidate functions, and iteratively refining their solutions using a differential testing oracle. This interactive setting encourages agents to perform function calls and self-correction based on feedback, providing a more realistic and challenging testbed for evaluating LLM-based program synthesis and inductive reasoning. The model in this repository is a fine-tuned LLaMA-3.1-8B-Instruct model optimized for these tasks.
+
+ ## Paper
+ [CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis](https://huggingface.co/papers/2503.23145)
+
+ ## Code
+ [https://github.com/Anjiang-Wei/CodeARC](https://github.com/Anjiang-Wei/CodeARC)
+
+ ## Website
+ [https://anjiang-wei.github.io/CodeARC-Website/](https://anjiang-wei.github.io/CodeARC-Website/)

+ ## Datasets
+ * **Problems Dataset**: [anjiangwei/CodeARC-Problems](https://huggingface.co/datasets/anjiangwei/CodeARC-Problems)
+ * **10 Input-Output examples for each problem**: [anjiangwei/CodeARC-Invocations](https://huggingface.co/datasets/anjiangwei/CodeARC-Invocations)

+ ## Fine-tuned models
+ * [https://huggingface.co/LLM4Code/CodeARC_annotated_llama3.1](https://huggingface.co/LLM4Code/CodeARC_annotated_llama3.1)
+ * [https://huggingface.co/LLM4Code/CodeARC_anonymous_llama3.1](https://huggingface.co/LLM4Code/CodeARC_anonymous_llama3.1)

+ ## Usage

+ You can use this fine-tuned model with the `transformers` library for text generation tasks.

+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

+ # Ensure you replace 'your-model-id' with the actual model ID of this repository
+ # For example, if this model is LLM4Code/CodeARC_annotated_llama3.1, use that.
+ # Assuming this model is one of the fine-tuned versions based on context.
+ model_name = "LLM4Code/CodeARC_annotated_llama3.1"  # Example, please adjust if different

+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype=torch.bfloat16,  # Or torch.float16 if bfloat16 is not supported
+     device_map="auto",
+ )
+ model.eval()

+ # Example prompt for inductive program synthesis
+ # This example asks for a Python function based on input-output pairs
+ prompt = """<|begin_of_text|><|start_header_id|>user<|end_header_id|>
+
+ Synthesize a Python function `sum_list` that takes a list of integers and returns their sum.
+
+ Input: [1, 2, 3]
+ Output: 6
+
+ Input: [5, 0, -5]
+ Output: 0<|eot_id|>
+ <|start_header_id|>assistant<|end_header_id|>
+
+ ```python
+ def sum_list(numbers):
+     # Your code here
  ```
+ """
+
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+ # Generate response
+ generation_output = model.generate(
+     **inputs,
+     max_new_tokens=100,
+     do_sample=True,
+     top_p=0.9,
+     temperature=0.6,
+     eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eom_id|>")]
+ )
+ generated_text = tokenizer.decode(generation_output[0], skip_special_tokens=True)
+
+ print(generated_text)
+ ```
+
+ For more detailed usage, evaluation scripts, and setting up the full CodeARC environment, please refer to the [official GitHub repository](https://github.com/Anjiang-Wei/CodeARC).
+
+ ## Citation
+
+ If you use this model or the CodeARC framework in your research, please cite the corresponding paper:
+
+ ```bibtex
  @article{wei2025codearc,
    title={CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis},
    author={Wei, Anjiang and Suresh, Tarun and Cao, Jiannan and Kannan, Naveen and Wu, Yuheng and Yan, Kai and Teixeira, Thiago SFX and Wang, Ke and Aiken, Alex},
    journal={arXiv preprint arXiv:2503.23145},
    year={2025}
  }
+ ```
+
+ ## License
+
+ This project is licensed under the Apache 2.0 License. See the [LICENSE](https://opensource.org/licenses/Apache-2.0) file for details.
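The abstract added to the card above describes an interactive protocol in which an agent queries a hidden target function, proposes candidate implementations, and refines them against a differential testing oracle. The sketch below is a small, self-contained illustration of that loop; every name in it (`hidden_target`, `differential_test`, the probe inputs) is invented for exposition and does not correspond to the actual CodeARC implementation, which lives in the GitHub repository linked on the card.

```python
# Illustrative sketch of the interaction loop described in the abstract: an agent queries a
# hidden target function, proposes a candidate, and a differential testing oracle searches
# for a counterexample. All names here are hypothetical, not the CodeARC API.
from typing import Callable, Iterable, Optional

def hidden_target(xs: list[int]) -> int:
    # Stand-in for the hidden function the agent must reverse-engineer.
    return sum(xs)

def differential_test(candidate: Callable[[list[int]], int],
                      target: Callable[[list[int]], int],
                      probe_inputs: Iterable[list[int]]) -> Optional[list[int]]:
    # Return an input on which the candidate and the target disagree, or None if none is found.
    for xs in probe_inputs:
        if candidate(xs) != target(xs):
            return xs
    return None

# A deliberately wrong first candidate, then a corrected one (the roles an LLM agent would play).
candidates = [lambda xs: len(xs), lambda xs: sum(xs)]
probes = [[1, 2, 3], [5, 0, -5], [], [42]]

for candidate in candidates:
    counterexample = differential_test(candidate, hidden_target, probes)
    if counterexample is None:
        print("candidate accepted by the oracle")
        break
    print(f"counterexample found: {counterexample}; refining candidate")
```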