WHAT_IS_F1_HTML = """

FormulaOne

Frontier AI models have recently demonstrated strong performance on mathematical and algorithmic benchmarks, including earning gold medals in olympiads, and attaining top percentile ratings in competitive programming contests. How well do such benchmarks capture the true depth of algorithmic reasoning, as it arises in real-world research problems?

We believe that existing benchmarks fail to capture the deep reasoning skills required for complex, research-level algorithmic problems. To address this gap, we introduce FormulaOne.

FormulaOne consists of 220 novel dynamic programming problems over graphs. The problems are organised into three categories, ranging from moderate difficulty and all the way up to research-level.

Category Description
FormulaOne Warmup A set of 100 “easier” problems.
FormulaOne Tier 1 A set of 100 challenging problems.
FormulaOne Tier 2 A set of 20 highly challenging problems.

Warmup: Union-of-Paths-and-Cycles

Objective: Compute the sum of all weights of sets S⊆V such that the graph G[S], induced over S, is a collection of disjoint paths and cycles.

Tier 1: Maximal-Union-of-Paths-and-Cycles

Objective: Compute the sum of all weights of sets S⊆V such that the graph G[S], induced over S, is a collection of disjoint paths and cycles, and S is maximal with respect to this property.

Tier 2: Maximal-Union-of-Cycles

Objective: Compute the sum of all weights of sets S⊆V such that the graph G[S], induced over S, is a collection of disjoint cycles, and S is maximal with respect to this property.

The latter category is incredibly demanding, requiring resolution of many points of uncertainty, and involving an array of reasoning steps, including topological and geometric insight, knowledge of mathematical domains such as extremal graph theory and logic, combinatorial considerations, precise implementation, and more.

Despite Frontier models’ impressive performance on existing benchmarks, presently no model solves even a single FormulaOne Tier 2 problem.1

An “Infinite Well” of Problems

The novelty and vastness of FormulaOne stems from its theoretical foundation. The questions are not arbitrary puzzles, but are instead drawn from the highly expressive framework of Monadic Second-Order (MSO) logic on graphs. This provides a principled, semi-automatic way to generate a virtually infinite supply of mathematically deep algorithmic challenges. Despite their theoretical underpinnings, the problems in FormulaOne are natural and succinct:

Problem #44

Input: A tree-like graph G=(V,E), a tree decomposition T of G, and a weight function w:V→N.

Objective: Compute the sum of all weights of sets S⊆V such that the graph G[S], induced over S, does not contain any cycle of length four.

Notation: The weight of a set of vertices S is defined as w(S) ≜ ∑v∈Sw(v). The final result should be returned modulo 109+7.

While the problems are often natural to state, their solutions are far from obvious. The solvability of this vast class of problems is guaranteed by an algorithmic meta-theorem due to Courcelle, which broadly states:

“For every sufficiently tree-like graph, any problem definable in an expressive formal logic — Monadic Second-Order (MSO) logic — can be solved by a dynamic programming algorithm that operates in time linear in the order of the graph.”

The key is to use a structure known as a tree decomposition, which organises the graph’s vertices into a series of overlapping sets, or “bags”, that are themselves arranged in a tree.

An illustration of local modifications to bags (dashed boxes)
An illustration of local modifications to bags: Introduce, Forget, and Join.

An algorithm can then traverse this tree of bags, solving the problem piece by piece using dynamic programming. This process involves designing a “state” that summarises all necessary information about the partial solution within a bag, and then defining how this state transforms as vertices are introduced, forgotten, or bags are merged.

Animation showing the design of a compressed dynamic programming state-space.

The deceptive simplicity of the problem statements belies the extraordinary difficulty of discovering the correct dynamic programming solution. This process is riddled with subtle combinatorial and logical pitfalls, demanding a profound understanding of the problem’s underlying structure. For a detailed walkthrough of the fifteen interdependent reasoning steps required to solve a single hard problem – Maximal-Cluster-Graph – see the appendix of our paper.

Guiding Principles

Evaluation

To give models the best possible chance of success, we provide a generous few-shot prompt that covers a broad array of the ideas and techniques involved in solving these problems. All models were evaluated using their highest available reasoning settings and with the maximum context length permitted.

Each submitted solution is subjected to a rigorous and automated test suite that measures three key aspects of its validity:

To support research and encourage community contributions, the FormulaOne-Warmup dataset is released as a public resource for training and fine-tuning models. The complete test suite for all 100 Warmup problems is available, alongside a standalone evaluation environment, in our public GitHub repository: https://github.com/double-ai/formulaone-dataset/tree/main.

In contrast, to maintain the integrity of the core benchmark, only a minimal subset of tests is released for the FormulaOne Tier 1 and Tier 2 problems.

Model Accuracy

On the FormulaOne-Warmup problems, frontier models perform reasonably well. This confirms they have a foundational capability for these types of algorithmic tasks.

Plot showing model performance on FormulaOne-Warmup
Performance of frontier models on the FormulaOne-Warmup dataset.

However, as the reasoning depth increases in FormulaOne Tier 1, and solutions require the discovery and integration of novel and more complex state representations, model performance drops off sharply.

Plot showing model performance on FormulaOne Tier 1
Figure 1: Performance of frontier reasoning models on the FormulaOne dataset.

This trend culminates in FormulaOne Tier 2, where the difficulty is characteristic of exploratory research problems. On this set of 20 problems, no current frontier model solves even a single one. This result starkly illustrates the gap that remains between high performance on existing benchmarks and the deep algorithmic reasoning required for truly complex problems.

""" LLM_BENCHMARKS_TEXT = """ ## How it works ## Reproducibility To reproduce our results, here is the commands you can run: """ EVALUATION_QUEUE_TEXT = """ ## Submitting to the FormulaOne Leaderboard This leaderboard evaluates systems on the FormulaOne core dataset. Submissions consist of a .jsonl file with solution code for each problem. ### 📁 I. Format Your Submission File Your submission must be a .jsonl file with one entry per problem: ```json {"problem_id": "1", "solution": ""} {"problem_id": "2", "solution": ""} ... ``` - problem_id: Must match the official list of FormulaOne core problems. - solution: A Python code implementing the required callback functions. 📄 Full list of problem_ids: View the [FormulaOne core dataset](https://github.com/double-ai/formulaone-dataset-release/dataset/formulaone) for the complete list of problem IDs. ⚠️ Validation Rules: Submissions must: - Contain exactly two columns: ["problem_id", "solution"] - Include all required problems (no missing/unknown IDs) - Provide solutions as Python strings - Avoid duplicates ### 📤 II. Submit via the UI below - Upload your `.jsonl` file. - Fill in the following fields: - **System Name** - **Organization** - **System Type** - Click **Submit**. ### ⏱️ After Submission Submissions are validated and evaluated within ~24 hours. Results will appear on the leaderboard once processed. """ CITATION_BUTTON_LABEL = """📚 How to cite FormulaOne""" CITATION_BUTTON_TEXT = r""" @misc{beniamini2025formulaonemeasuringdepthalgorithmic, title={FormulaOne: Measuring the Depth of Algorithmic Reasoning Beyond Competitive Programming}, author={Gal Beniamini and Yuval Dor and Alon Vinnikov and Shir Granot Peled and Or Weinstein and Or Sharir and Noam Wies and Tomer Nussbaum and Nadav Schweiger and Ido Ben Shaul and Tomer Zekharya and Yoav Levine and Shai Shalev-Shwartz and Amnon Shashua}, year={2025}, eprint={2507.13337}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2507.13337}, } """