🚚 Vehicle Routing Problem Solver with Transformer-based Reinforcement Learning

This project implements a deep reinforcement learning framework to solve the Vehicle Routing Problem with Time Windows (VRPTW) using Transformer-based models. It also integrates Google OR-Tools as a classical baseline for comparison.
🌟 Project Highlights

⚙️ Transformer-based Actor-Critic architecture

🧠 Reinforcement Learning (Policy Gradient with Baseline)

🛰️ Google OR-Tools as Benchmark

🧪 Compatible with custom and Shanghai-like datasets

🔍 Supports beam search, nearest-neighbor heuristics, and greedy decoding

📦 Designed to run on Hugging Face Spaces (Docker SDK)
📁 Project Structure

```bash
├── dataloader.py   # Custom dataset handling (VRP with time windows)
├── run.py          # Training pipeline
├── params.json     # Hyperparameters and config
└── README.md       # This file
```
🧠 Model Description

This model is inspired by the paper “Attention, Learn to Solve Routing Problems!” (Kool et al., 2018, arXiv:1803.08475).
An actor-critic reinforcement learning strategy with a learnable baseline

Beam Search and Greedy decoding options
📦 Requirements

The environment is automatically built using the included Dockerfile on Hugging Face Spaces. However, if you want to run it locally, install:

```bash
pip install torch ortools numpy matplotlib
```
⚙️ Configuration (params.json)

Update the params.json file to configure:

```json
{
  "device": "cpu",
  "run_tests": true,
  "save_results": true,
  "dataset_path": "",
  "train_dataset_size": 1000,
  "validation_dataset_size": 100,
  "num_nodes": 20,
  "num_depots": 1,
  "embedding_size": 128,
  "sample_size": 3,
  "num_epochs": 50
}
```
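run.py presumably reads this file at startup. A minimal sketch of such a loader (a hypothetical helper, assuming run.py consumes the keys shown above; the repo's actual loading code may differ):

```python
import json


def load_params(path="params.json"):
    """Load hyperparameters from a JSON config file."""
    with open(path) as f:
        params = json.load(f)
    # Fail fast if a required key is missing.
    required = {"device", "num_nodes", "embedding_size", "num_epochs"}
    missing = required - params.keys()
    if missing:
        raise KeyError(f"params.json is missing: {sorted(missing)}")
    return params
```

For example, `load_params()["embedding_size"]` would then drive the model width.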
🚀 Run the Training Pipeline

If you're using Hugging Face Spaces, training begins automatically.

To run locally:

```bash
python run.py
```
🧪 Evaluation

The model is evaluated against:

Google OR-Tools (via google_solver/)

Nearest neighbor baseline

Greedy decoding

Metrics include total travel time and ratios vs. each baseline.
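The nearest-neighbor baseline and the reported length ratios can be sketched in plain Python (Euclidean distances; the function names are illustrative, not the repo's actual code):

```python
import math


def tour_length(points, route):
    """Total Euclidean length of a closed route over 2-D points."""
    return sum(
        math.dist(points[route[i]], points[route[i + 1]])
        for i in range(len(route) - 1)
    )


def nearest_neighbor_route(points, depot=0):
    """Classic nearest-neighbor heuristic: always visit the closest
    unvisited customer, then return to the depot."""
    unvisited = set(range(len(points))) - {depot}
    route, current = [depot], depot
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[current], points[j]))
        route.append(nxt)
        unvisited.discard(nxt)
        current = nxt
    route.append(depot)
    return route


# Example: a depot at the origin with three customers.
points = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.5)]
nn_route = nearest_neighbor_route(points)
nn_len = tour_length(points, nn_route)

# A reported figure like "Actor/NN" is simply actor_tour_length / nn_len,
# where actor_tour_length comes from the learned model's decoded route.
```

A ratio below 1.0 therefore means the learned policy found shorter routes than the heuristic.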
📊 Example Output

```text
Epoch: 0, Batch: 0, Actor/NN: 1.1420, Actor/Baseline: 0.9934
Test Results: Actor/Google: 1.032, Actor/NN: 0.951, Best NN Ratio: 0.912
```
📚 References

Kool et al., “Attention, Learn to Solve Routing Problems!” arXiv:1803.08475

OR-Tools by Google: https://developers.google.com/optimization/
📄 License

This project is released under the MIT License.