---
license: cc-by-4.0
language:
- en
tags:
- Large Language Models
- LLM Evaluation
- Sequential Reasoning
- Scaling Laws
- Synthetic Benchmarks
- Commonsense Reasoning
- Spatial Reasoning
- Knowledge Graphs
---
# SeqBench: A Tunable Benchmark to Quantify Sequential Reasoning Limits of LLMs
## Description
SeqBench is a programmatically generated benchmark designed to rigorously evaluate and analyze the sequential reasoning capabilities of language models. Task instances involve pathfinding in 2D grid environments, requiring models to perform multi-step inference over a combination of relevant and distracting textual facts.
The benchmark allows for fine-grained, orthogonal control over key complexity dimensions:
1. **Logical Depth (L)**: The number of actions in the ground-truth optimal solution.
2. **Backtracking Count (B)**: The number of locked doors on the optimal path that necessitate detours to find corresponding keys.
3. **Noise Ratio (N)**: The proportion of distracting (irrelevant) facts relative to supporting (relevant) facts in the problem description.
This dataset (`seqBench_compact.jsonl.gz`) contains **7079 instances**, sampled to provide broad coverage across these complexity dimensions.
Each instance provides:
- `instance_id`: A unique identifier for the specific problem variant.
- `context`: The natural language problem description presented to the model.
- `completion`: The ground-truth sequence of actions representing the optimal solution.
- `complexity_parameters`: A dictionary containing the specific L, B, and N values for the instance.
- `instance_metadata`: Additional information, including maze dimensions and agent/target names.
- `structural_details`: A JSON string detailing the underlying base maze configuration. This includes room coordinate mappings, adjacency lists, door/key states, and all canonical facts (before noise application).
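For quick exploration, the dataset can also be read with the Hugging Face `datasets` library. A minimal sketch, assuming the generic JSON loader (which handles gzipped JSONL transparently):
```python
from datasets import load_dataset

# Load the gzipped JSONL file via the generic "json" builder
ds = load_dataset("json", data_files="seqBench_compact.jsonl.gz", split="train")

example = ds[0]
print(example["instance_id"])
print(example["complexity_parameters"])  # {'logical_depth_L': ..., 'backtracking_count_B': ..., 'noise_ratio_N': ...}
```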
## Dataset Structure and Schema
The dataset is provided in gzipped JSONL format (`seqBench_compact.jsonl.gz`). Each line is a JSON object representing a single problem instance with the following fields:
* **`instance_id`** (`string`): Unique identifier for the problem instance.
* **`context`** (`string`): Textual problem description.
* **`completion`** (`string`): Expected sequence of actions, serialized as a string (e.g., `"['action1: param1', 'action2: param2', ...]"`); see the parsing sketch after this list.
* **`complexity_parameters`** (`object`):
* `logical_depth_L` (`int64`): Logical Depth (L).
* `backtracking_count_B` (`int64`): Backtracking Count (B).
* `noise_ratio_N` (`float64`): Applied Noise Ratio (N).
* **`instance_metadata`** (`object`):
* `maze_rows` (`int64`): Number of rows in the maze grid.
* `maze_cols` (`int64`): Number of columns in the maze grid.
* `agent_name` (`string`): Agent's name.
* `target_name` (`string`): Target/victim's name.
* **`structural_details`** (`string`): A JSON string containing:
* `mappings` (`object`):
* `coordinate_to_name` (`object`): Maps coordinate strings (e.g., "3,6") to original room identifiers (e.g., "D5").
* `structure` (`object`):
* `adjacency_list` (`object`): Maps coordinate strings to a list of directly connected coordinate strings.
* `door_details` (`object`): Maps a door identifier string (lexicographically sorted coordinate strings joined by "_", e.g., "3,6_3,7") to an object: `{"status": "open" | "closed and locked", "key_id": "string"}`.
* `key_locations` (`object`): Maps a `key_id` string to the coordinate string of the room containing the key.
* `start_room_coord` (`string`): Coordinate string of the agent's starting room.
* `end_room_coord` (`string`): Coordinate string of the victim's room.
* `canonical_facts` (`list`): A list of objects, where each object represents a true factual statement about the base maze (before noise/shuffling). Each fact object has: `{"type": "string", "args": list_of_strings, "supporting": boolean}`. The `args` are specific to the `type` (e.g., for "connected_rooms", args might be `["coord1_str", "coord2_str", "status_str"]`).
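Note that `completion` is serialized as a string rather than a JSON array, so it must be parsed before scoring model outputs against it. A minimal sketch using `ast.literal_eval` (the action names below are hypothetical placeholders, not the benchmark's actual vocabulary):
```python
import ast

# Hypothetical completion string in the documented "['action: param', ...]" format
completion = "['move: east', 'pickup: key_A', 'unlock: door_A', 'move: south']"

actions = ast.literal_eval(completion)  # Python-style list literal -> list of strings
for step, action in enumerate(actions, start=1):
    action_type, _, param = action.partition(": ")
    print(f"{step}. {action_type} -> {param}")
```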
A machine-readable metadata file (`croissant.json`) is included in the `metadata/` directory of the main repository to facilitate dataset discovery and integration.
## Using `structural_details`
The `structural_details` field offers a ground-truth representation of the maze.
```python
import gzip
import json

# Example: load the first instance and inspect its structural_details
file_path = "seqBench_compact.jsonl.gz"  # Path to your dataset file

instance_data = None
try:
    with gzip.open(file_path, "rt", encoding="utf-8") as f:
        first_line = f.readline()
        if first_line:
            instance_data = json.loads(first_line)
except FileNotFoundError:
    print(f"Error: Dataset file not found at {file_path}")
except Exception as e:
    print(f"Error loading dataset: {e}")

if instance_data:
    print(f"Instance ID: {instance_data.get('instance_id', 'N/A')}")

    # structural_details is stored as a JSON string and must be parsed separately
    structural_details_str = instance_data.get("structural_details")
    if structural_details_str:
        structural_details = json.loads(structural_details_str)
        structure = structural_details.get("structure", {})

        start_coord_str = structure.get("start_room_coord")
        print(f"Start Room Coordinate String: {start_coord_str}")

        # Example: door details for a hypothetical door.
        # Door keys are formed by sorting the two coordinate strings and joining with '_'.
        coord1_str, coord2_str = "3,6", "3,7"  # Replace with actual coordinates to check
        door_dict_key = "_".join(sorted([coord1_str, coord2_str]))
        door_info = structure.get("door_details", {}).get(door_dict_key)
        if door_info:
            print(f"Door info for {door_dict_key}: {door_info}")
        else:
            print(f"No door entry for {door_dict_key} (the rooms may not share a door).")

        print(f"Key locations: {structure.get('key_locations', {})}")
        # print("First canonical fact:", structural_details.get("canonical_facts", [{}])[0])
    else:
        print("structural_details field is missing or empty.")

# For a deeper understanding of the data generation pipeline and semantics,
# refer to the scripts (`maze.py`, `maze_loader.py`, `rooms.py`)
# available in the main project repository.
```
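Since `adjacency_list` encodes the full room graph, it can also be used to sanity-check solutions. The sketch below runs a breadth-first search between the start and end coordinates; note that it ignores door lock states, so the path length it returns is only a lower bound on the instance's true Logical Depth (L):
```python
from collections import deque

def shortest_path(adjacency_list, start, goal):
    """BFS over the coordinate-string room graph, ignoring locked doors."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        room = path[-1]
        if room == goal:
            return path
        for neighbor in adjacency_list.get(room, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Reusing `structure` parsed in the example above:
# path = shortest_path(structure["adjacency_list"],
#                      structure["start_room_coord"],
#                      structure["end_room_coord"])
# print(len(path) - 1 if path else "unreachable")
```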
## Dataset Statistics (for `seqBench_compact.jsonl.gz`)
* **Total Instances:** 7079
* **Logical Depth (L):**
* Range: [3, 774]
  * Distribution: Instances span a wide range of L values; for L-bins of size 5 (e.g., L 0-4, L 5-9, ...), the lower- to mid-range bins typically contain 30-80 instances each.
* **Backtracking Count (B):**
* Range: [0, 6]
* Distribution:
* B = 0: 441 instances
* B = 1: 438 instances
* B = 2: 565 instances
* B = 3: 790 instances
* B = 4: 1046 instances
* B = 5: 1601 instances
* B = 6: 2198 instances
* **Noise Ratio (N):**
  * Values: 0.0, 0.2, 0.4, 0.6, 0.8, 1.0 (six discrete noise levels)
* Distribution: Instances are approximately evenly distributed across the 6 noise levels, each with roughly 1180 instances.
* **Combined Complexity:** The dataset is sampled to ensure coverage across (B, N) combinations (typically 60-380 instances per pair) and (L-bin, N) combinations (aiming for approximately 10 instances per L-bin of size 5 for each N, varying with the natural distribution of L).
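These marginal distributions can be verified directly from the file; a short sketch:
```python
import gzip
import json
from collections import Counter

b_counts, n_counts = Counter(), Counter()
with gzip.open("seqBench_compact.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        params = json.loads(line)["complexity_parameters"]
        b_counts[params["backtracking_count_B"]] += 1
        n_counts[params["noise_ratio_N"]] += 1

print(sorted(b_counts.items()))  # expected: [(0, 441), (1, 438), ..., (6, 2198)]
print(sorted(n_counts.items()))  # roughly 1180 instances per noise level
```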
## Generation Process
The benchmark instances are generated through a multi-stage process:
1. **Base Maze Generation**: Acyclic maze graphs are programmatically created on rectangular grids (`maze_rows` × `maze_cols`).
2. **Rewind Construction**: A target number of backtracking maneuvers (B_target) is embedded by working backward from the goal room, strategically placing keys and locked doors.
3. **NLP Formulation**: For each base maze configuration, a list of canonical facts describing the environment and task is derived.
4. **Noise Application**: A specified `noise_ratio_N` is used to select a proportion of distracting (irrelevant) facts to include alongside supporting (relevant) facts, forming the final `context`.
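As an illustration of step 4, the sketch below shows one plausible way to mix supporting and distracting facts for a given `noise_ratio_N`; the generator's exact sampling and shuffling logic lives in the project scripts (`maze.py` and related files) and may differ:
```python
import random

def apply_noise(canonical_facts, noise_ratio_n, seed=0):
    """Illustrative sketch only: keep every supporting fact and add
    distracting facts in proportion noise_ratio_n relative to the
    number of supporting facts, then shuffle them together."""
    rng = random.Random(seed)
    supporting = [f for f in canonical_facts if f["supporting"]]
    distracting = [f for f in canonical_facts if not f["supporting"]]
    k = min(len(distracting), round(noise_ratio_n * len(supporting)))
    selected = supporting + rng.sample(distracting, k)
    rng.shuffle(selected)  # interleave relevant and irrelevant facts
    return selected
```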
## Citation
Please cite this work as:
```bibtex
@misc{anonymous2025seqbench,
author = {Anonymous Submission},
title = {SeqBench: A Tunable Benchmark to Quantify Sequential Reasoning Limits of LLMs},
year = {2025},
publisher = {Proceedings of the Conference on Empirical Methods in Natural Language Processing},
note = {Special Theme: Interdisciplinary Recontextualization of NLP},
comment = {Dataset accessible at https://huggingface.co/datasets/emnlp-submission/seqBench}
}
``` |