PyTorch · qwen2

Commit 1ffad9d (verified) · Parent(s): bec4a23
xssstory committed

Update README.md

Files changed (1): README.md (+2 −0)
README.md CHANGED
@@ -9,6 +9,8 @@ base_model:
 ### Instruction
 
 [![GitHub](https://img.shields.io/badge/GitHub-Repository-black?logo=github)](https://github.com/inclusionAI/ASearcher)
 
+📄 Paper: [https://arxiv.org/abs/2508.07976](https://arxiv.org/abs/2508.07976)
+
 ASearcher is an open-source framework designed for large-scale online reinforcement learning (RL) training of search agents. Our mission is to advance Search Intelligence to expert-level performance. We are fully committed to open-source by releasing model weights, detailed training methodologies, and data construction pipelines. Additionally, we provide comprehensive guidance on building and training customized agents based on AReaL. ASearcher empowers developers to build their own high-performance search agents easily and cost-effectively.
 
 We have released multiple models trained with different settings and based on foundation models of varying sizes. These models have achieved outstanding performance on Single-Hop / Multi-Hop QA and more challenging tool-augmented benchmarks like GAIA, Xbench.