SpencerCPurdy committed on
Commit ace4358 · verified · 1 Parent(s): 98682ce

Create app.py

Files changed (1)
  1. app.py +2143 -0
app.py ADDED
@@ -0,0 +1,2143 @@
"""
Cross-Asset Arbitrage Engine with Transformer Models
Author: Spencer Purdy
Description: A sophisticated arbitrage engine leveraging transformer models for price forecasting
across multiple asset classes and venues. Integrates CEX/DEX venues with latency-aware
execution, LLM-driven optimization, and comprehensive risk analytics.
"""

# Install required packages
# !pip install -q transformers torch numpy pandas scikit-learn plotly gradio scipy statsmodels networkx

# Core imports
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from datetime import datetime, timedelta
import gradio as gr
import plotly.graph_objects as go
import plotly.express as px
from plotly.subplots import make_subplots
import json
import random
from typing import Dict, List, Tuple, Optional, Any, Union
from dataclasses import dataclass, field
from collections import defaultdict, deque
import warnings
warnings.filterwarnings('ignore')

# Additional imports
from scipy import stats
from scipy.optimize import minimize
from sklearn.preprocessing import StandardScaler
import networkx as nx

# Set random seeds for reproducibility
np.random.seed(42)
torch.manual_seed(42)
random.seed(42)

# Configuration constants
TRADING_DAYS_PER_YEAR = 365  # Crypto markets trade 24/7
RISK_FREE_RATE = 0.045  # Annualized risk-free rate assumption
TRANSACTION_COST_CEX = 0.001  # 0.1% per leg on centralized exchanges
TRANSACTION_COST_DEX = 0.003  # 0.3% per leg on decentralized exchanges
GAS_COST_USD = 5.0  # Average gas cost for DEX transactions
MIN_PROFIT_THRESHOLD = 0.002  # 0.2% minimum profit after costs
MAX_POSITION_SIZE = 100000  # Maximum position size in USD
LATENCY_CEX_MS = 50  # Average CEX latency in milliseconds
LATENCY_DEX_MS = 1000  # Average DEX latency (block time)

# Asset configuration
ASSET_CLASSES = {
    'crypto_spot': ['BTC', 'ETH', 'SOL', 'MATIC', 'AVAX'],
    'crypto_futures': ['BTC-PERP', 'ETH-PERP', 'SOL-PERP'],
    'fx_pairs': ['EUR/USD', 'GBP/USD', 'USD/JPY', 'AUD/USD'],
    'equity_etfs': ['SPY', 'QQQ', 'IWM', 'EFA', 'EEM']
}

# Exchange configuration
EXCHANGES = {
    'cex': ['Binance', 'Coinbase', 'Kraken', 'FTX'],
    'dex': ['Uniswap_V3', 'SushiSwap', 'Curve', 'Balancer']
}
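# Illustrative sanity check, not part of the engine: for a CEX-to-CEX round trip,
# fees alone consume 2 * TRANSACTION_COST_CEX = 0.2% of notional, so a cross-venue
# price gap must exceed roughly fees plus MIN_PROFIT_THRESHOLD (0.4% in total with
# the defaults above) before an opportunity clears the bar.
_CEX_BREAKEVEN_SPREAD = 2 * TRANSACTION_COST_CEX + MIN_PROFIT_THRESHOLD
assert abs(_CEX_BREAKEVEN_SPREAD - 0.004) < 1e-12
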
@dataclass
class OrderBook:
    """Order book data structure for managing bid/ask levels"""
    exchange: str
    asset: str
    timestamp: datetime
    bids: List[Tuple[float, float]]  # List of (price, size) tuples
    asks: List[Tuple[float, float]]  # List of (price, size) tuples

    def get_best_bid(self) -> Tuple[float, float]:
        """Get best bid price and size"""
        return self.bids[0] if self.bids else (0.0, 0.0)

    def get_best_ask(self) -> Tuple[float, float]:
        """Get best ask price and size"""
        return self.asks[0] if self.asks else (float('inf'), 0.0)

    def get_mid_price(self) -> float:
        """Calculate mid price from best bid and ask"""
        bid, _ = self.get_best_bid()
        ask, _ = self.get_best_ask()
        return (bid + ask) / 2 if bid > 0 and ask < float('inf') else 0.0
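# Minimal usage sketch for OrderBook (hypothetical quotes, not live data). Note the
# implicit invariant that bids/asks are sorted best-first, which get_best_bid and
# get_best_ask rely on: a one-level book quoted 99.0 / 101.0 has a mid of 100.0.
_example_book = OrderBook(
    exchange='Binance',
    asset='BTC',
    timestamp=datetime.now(),
    bids=[(99.0, 1.5)],
    asks=[(101.0, 2.0)]
)
assert _example_book.get_mid_price() == 100.0
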
@dataclass
class ArbitrageOpportunity:
    """Data structure for storing identified arbitrage opportunities"""
    opportunity_id: str
    asset: str
    buy_exchange: str
    sell_exchange: str
    buy_price: float
    sell_price: float
    max_size: float
    expected_profit: float
    expected_profit_pct: float
    latency_risk: float
    timestamp: datetime
    metadata: Dict[str, Any] = field(default_factory=dict)

@dataclass
class ExecutionResult:
    """Data structure for storing execution results"""
    opportunity_id: str
    success: bool
    executed_size: float
    buy_fill_price: float
    sell_fill_price: float
    realized_profit: float
    slippage: float
    latency_ms: float
    gas_cost: float
    timestamp: datetime
class NumericalTransformer(nn.Module):
    """Transformer architecture adapted for numerical price prediction with training capability"""

    def __init__(self, input_dim: int = 10, hidden_dim: int = 128,
                 num_heads: int = 8, num_layers: int = 4,
                 prediction_horizon: int = 5):
        super().__init__()

        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.prediction_horizon = prediction_horizon

        # Input projection layer
        self.input_projection = nn.Linear(input_dim, hidden_dim)

        # Positional encoding for sequence position information
        self.positional_encoding = self._create_positional_encoding(1000, hidden_dim)

        # Transformer encoder for feature extraction
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim,
            nhead=num_heads,
            dim_feedforward=hidden_dim * 4,
            dropout=0.1,
            activation='gelu',
            batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

        # Output heads for different prediction targets
        self.price_head = nn.Linear(hidden_dim, prediction_horizon)
        self.volatility_head = nn.Linear(hidden_dim, prediction_horizon)
        self.volume_head = nn.Linear(hidden_dim, prediction_horizon)
        self.uncertainty_head = nn.Linear(hidden_dim, prediction_horizon)

        # Initialize weights
        self._init_weights()

    def _init_weights(self):
        """Initialize model weights using Xavier initialization"""
        for module in self.modules():
            if isinstance(module, nn.Linear):
                nn.init.xavier_uniform_(module.weight)
                if module.bias is not None:
                    nn.init.zeros_(module.bias)

    def _create_positional_encoding(self, max_len: int, d_model: int) -> nn.Parameter:
        """Create sinusoidal positional encoding"""
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() *
                             (-np.log(10000.0) / d_model))

        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)

        return nn.Parameter(pe.unsqueeze(0), requires_grad=False)

    def forward(self, x: torch.Tensor, mask: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
        """
        Forward pass through the transformer
        Args:
            x: Input tensor of shape (batch_size, seq_len, input_dim)
            mask: Optional attention mask
        Returns:
            Dictionary containing price, volatility, volume predictions and uncertainty
        """
        batch_size, seq_len, _ = x.shape

        # Project input to hidden dimension
        x = self.input_projection(x)

        # Add positional encoding
        x = x + self.positional_encoding[:, :seq_len, :]

        # Pass through transformer encoder
        encoded = self.transformer(x, src_key_padding_mask=mask)

        # Use last sequence position for prediction
        last_hidden = encoded[:, -1, :]

        # Generate predictions from specialized heads
        price_pred = self.price_head(last_hidden)
        volatility_pred = torch.sigmoid(self.volatility_head(last_hidden)) * 0.1  # Scale to [0, 0.1]
        volume_pred = torch.exp(self.volume_head(last_hidden))  # Ensure positive
        uncertainty = torch.sigmoid(self.uncertainty_head(last_hidden))

        return {
            'price': price_pred,
            'volatility': volatility_pred,
            'volume': volume_pred,
            'uncertainty': uncertainty
        }
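# Quick shape check for NumericalTransformer (arbitrary small dimensions chosen
# for speed; the engine below uses the defaults). A batch of 4 sequences of 50
# steps with 10 features yields one value per step of the 5-step horizon.
_demo_model = NumericalTransformer(input_dim=10, hidden_dim=32, num_heads=4, num_layers=1)
with torch.no_grad():
    _demo_out = _demo_model(torch.zeros(4, 50, 10))
assert _demo_out['price'].shape == (4, 5)
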
class PriceForecastingEngine:
    """Engine for multi-asset price forecasting using transformers with actual training"""

    def __init__(self):
        self.models = {}  # One model per asset class
        self.scalers = {}  # Feature scalers
        self.training_history = defaultdict(list)
        self.is_trained = defaultdict(bool)

        # Initialize models for each asset class
        for asset_class in ASSET_CLASSES.keys():
            self.models[asset_class] = NumericalTransformer()
            self.scalers[asset_class] = StandardScaler()
            self.is_trained[asset_class] = False

    def prepare_features(self, price_data: pd.DataFrame) -> np.ndarray:
        """Prepare features for transformer input"""
        features = []

        # Price features
        features.append(price_data['close'].values)
        features.append(price_data['high'].values)
        features.append(price_data['low'].values)

        # Volume features (log-transformed)
        features.append(np.log1p(price_data['volume'].values))

        # Technical indicators
        returns = price_data['close'].pct_change().fillna(0)
        features.append(returns.values)

        # Moving averages
        ma_7 = price_data['close'].rolling(7).mean().bfill()
        ma_21 = price_data['close'].rolling(21).mean().bfill()
        features.append((price_data['close'] / ma_7 - 1).values)
        features.append((price_data['close'] / ma_21 - 1).values)

        # Volatility (rolling standard deviation)
        volatility = returns.rolling(20).std().bfill()
        features.append(volatility.values)

        # RSI (Relative Strength Index)
        rsi = self.calculate_rsi(price_data['close'])
        features.append(rsi.values / 100)  # Normalize to [0, 1]

        # Order flow imbalance (simulated for this example)
        ofi = np.random.normal(0, 0.1, len(price_data))
        features.append(ofi)

        # Stack features into matrix
        feature_matrix = np.column_stack(features)

        return feature_matrix

    def calculate_rsi(self, prices: pd.Series, period: int = 14) -> pd.Series:
        """Calculate Relative Strength Index"""
        delta = prices.diff()
        gain = (delta.where(delta > 0, 0)).rolling(window=period).mean()
        loss = (-delta.where(delta < 0, 0)).rolling(window=period).mean()

        rs = gain / loss
        rsi = 100 - (100 / (1 + rs))

        return rsi.fillna(50)

    def create_sequences(self, features: np.ndarray, targets: np.ndarray,
                         seq_len: int = 50, horizon: int = 5) -> Tuple[np.ndarray, np.ndarray]:
        """Create sequences for training the transformer"""
        sequences = []
        target_sequences = []

        for i in range(seq_len, len(features) - horizon):
            sequences.append(features[i-seq_len:i])
            target_sequences.append(targets[i:i+horizon])

        return np.array(sequences), np.array(target_sequences)

    def train_model(self, asset_class: str, price_data: pd.DataFrame,
                    epochs: int = 50, batch_size: int = 32):
        """Train the transformer model on historical data"""
        if len(price_data) < 100:
            return  # Not enough data to train

        # Prepare features
        features = self.prepare_features(price_data)

        # Scale features
        features_scaled = self.scalers[asset_class].fit_transform(features)

        # Prepare targets (future returns)
        returns = price_data['close'].pct_change().fillna(0).values

        # Create sequences
        X, y = self.create_sequences(features_scaled, returns)

        if len(X) == 0:
            return  # Not enough sequences

        # Convert to tensors
        X_tensor = torch.FloatTensor(X)
        y_tensor = torch.FloatTensor(y)

        # Create data loader
        dataset = torch.utils.data.TensorDataset(X_tensor, y_tensor)
        loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)

        # Setup training
        model = self.models[asset_class]
        optimizer = optim.Adam(model.parameters(), lr=0.001)
        criterion = nn.MSELoss()

        # Training loop
        model.train()
        for epoch in range(epochs):
            epoch_loss = 0.0

            for batch_X, batch_y in loader:
                optimizer.zero_grad()

                # Forward pass
                predictions = model(batch_X)

                # Calculate loss (using price predictions)
                loss = criterion(predictions['price'], batch_y)

                # Backward pass
                loss.backward()
                optimizer.step()

                epoch_loss += loss.item()

            # Record training history
            avg_loss = epoch_loss / len(loader)
            self.training_history[asset_class].append(avg_loss)

        # Mark as trained
        self.is_trained[asset_class] = True
        model.eval()

    def forecast_prices(self, asset: str, price_history: pd.DataFrame,
                        horizon: int = 5) -> Dict[str, np.ndarray]:
        """Generate price forecasts using transformer model"""

        # Determine asset class
        asset_class = self._get_asset_class(asset)
        if not asset_class:
            return self._generate_random_forecast(horizon)

        # Train model if not already trained
        if not self.is_trained[asset_class] and len(price_history) > 100:
            self.train_model(asset_class, price_history)

        # Prepare features
        features = self.prepare_features(price_history)

        # Scale features (reuse the fitted scaler when available instead of
        # refitting at inference time)
        if self.is_trained[asset_class]:
            features_scaled = self.scalers[asset_class].transform(features)
        else:
            features_scaled = self.scalers[asset_class].fit_transform(features)

        # Pad with zeros at the front if the history is shorter than one window
        seq_len = 50
        if len(features_scaled) < seq_len:
            padding = seq_len - len(features_scaled)
            features_scaled = np.vstack([
                np.zeros((padding, features_scaled.shape[1])),
                features_scaled
            ])

        # Get recent sequence
        sequence = features_scaled[-seq_len:].reshape(1, seq_len, -1)
        sequence_tensor = torch.FloatTensor(sequence)

        # Generate forecast
        model = self.models[asset_class]
        model.eval()

        with torch.no_grad():
            predictions = model(sequence_tensor)

        # Extract predictions
        current_price = price_history['close'].iloc[-1]

        # Convert relative predictions to absolute prices
        price_changes = predictions['price'].numpy()[0]
        price_forecast = current_price * (1 + price_changes * 0.01)  # Scale predictions

        return {
            'price': price_forecast,
            'volatility': predictions['volatility'].numpy()[0],
            'volume': predictions['volume'].numpy()[0],
            'uncertainty': predictions['uncertainty'].numpy()[0]
        }

    def _get_asset_class(self, asset: str) -> Optional[str]:
        """Determine asset class for a given asset"""
        for asset_class, assets in ASSET_CLASSES.items():
            if asset in assets or any(asset.startswith(a) for a in assets):
                return asset_class
        return None

    def _generate_random_forecast(self, horizon: int) -> Dict[str, np.ndarray]:
        """Generate random forecast as fallback"""
        return {
            'price': np.random.normal(100, 2, horizon),
            'volatility': np.random.uniform(0.01, 0.05, horizon),
            'volume': np.random.lognormal(15, 0.5, horizon),
            'uncertainty': np.random.uniform(0.3, 0.7, horizon)
        }
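# Worked example of the windowing in create_sequences (hypothetical sizes): with
# 100 rows, seq_len=50 and horizon=5, the loop index runs from 50 to 94, so the
# method yields 45 input windows of shape (50, n_features), each paired with the
# next 5 returns as its target.
assert len(range(50, 100 - 5)) == 45
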
class ExchangeSimulator:
    """Simulate exchange order books and execution with realistic market dynamics"""

    def __init__(self, exchange_type: str = 'cex'):
        self.exchange_type = exchange_type
        self.order_books = {}
        self.latency_ms = LATENCY_CEX_MS if exchange_type == 'cex' else LATENCY_DEX_MS
        self.transaction_cost = TRANSACTION_COST_CEX if exchange_type == 'cex' else TRANSACTION_COST_DEX

    def generate_order_book(self, asset: str, mid_price: float,
                            spread_bps: float = 10, market_conditions: Dict[str, Any] = None) -> OrderBook:
        """Generate realistic order book with dynamic spread based on market conditions"""

        # Widen the spread as volatility rises and liquidity falls
        if market_conditions:
            volatility = market_conditions.get('volatility', 0.02)
            liquidity = market_conditions.get('liquidity', 1.0)
            spread_bps *= (1 + volatility * 10) * (2 - liquidity)

        spread = mid_price * spread_bps / 10000

        # Generate bid/ask levels with depth decaying away from the touch
        n_levels = 10
        bids = []
        asks = []

        for i in range(n_levels):
            # Bid side
            bid_price = mid_price - spread/2 - i * spread/10
            bid_size = np.random.lognormal(10, 1) * (n_levels - i) / n_levels
            bids.append((bid_price, bid_size))

            # Ask side
            ask_price = mid_price + spread/2 + i * spread/10
            ask_size = np.random.lognormal(10, 1) * (n_levels - i) / n_levels
            asks.append((ask_price, ask_size))

        return OrderBook(
            exchange=self.exchange_type,
            asset=asset,
            timestamp=datetime.now(),
            bids=bids,
            asks=asks
        )

    def simulate_market_impact(self, size: float, liquidity: float) -> float:
        """Calculate market impact using a square-root model"""
        # Square-root market impact, as in Almgren-Chriss-style cost models
        impact_bps = 10 * np.sqrt(size / liquidity)
        return impact_bps / 10000

    def execute_order(self, order_book: OrderBook, side: str,
                      size: float) -> Tuple[float, float]:
        """
        Simulate order execution with realistic slippage
        Returns: (fill_price, actual_size)
        """
        filled_size = 0
        total_cost = 0

        if side == 'buy':
            # Execute against asks
            for ask_price, ask_size in order_book.asks:
                if filled_size >= size:
                    break

                fill_amount = min(size - filled_size, ask_size)
                filled_size += fill_amount
                total_cost += fill_amount * ask_price

        else:  # sell
            # Execute against bids
            for bid_price, bid_size in order_book.bids:
                if filled_size >= size:
                    break

                fill_amount = min(size - filled_size, bid_size)
                filled_size += fill_amount
                total_cost += fill_amount * bid_price

        # Calculate average fill price
        avg_fill_price = total_cost / filled_size if filled_size > 0 else 0

        # Add market impact
        liquidity = sum(s for _, s in order_book.bids) + sum(s for _, s in order_book.asks)
        impact = self.simulate_market_impact(filled_size, liquidity)

        if side == 'buy':
            avg_fill_price *= (1 + impact)
        else:
            avg_fill_price *= (1 - impact)

        return avg_fill_price, filled_size
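# Worked example of the square-root impact model above (hypothetical numbers):
# a fill that is 1% of visible book liquidity costs 10 * sqrt(0.01) = 1 bp,
# i.e. 0.0001 in fractional terms.
_impact_demo = ExchangeSimulator('cex')
assert abs(_impact_demo.simulate_market_impact(size=1.0, liquidity=100.0) - 0.0001) < 1e-12
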
class ArbitrageDetector:
    """Detect arbitrage opportunities across venues with advanced filtering"""

    def __init__(self):
        self.opportunity_history = []
        self.min_profit_threshold = MIN_PROFIT_THRESHOLD

    def find_opportunities(self, order_books: Dict[str, Dict[str, OrderBook]],
                           transaction_costs: Dict[str, float],
                           forecasts: Dict[str, Dict[str, np.ndarray]] = None) -> List[ArbitrageOpportunity]:
        """Find arbitrage opportunities across all venues with forecast integration"""

        opportunities = []

        # Check each asset
        for asset in self._get_all_assets(order_books):
            asset_books = self._get_asset_order_books(order_books, asset)

            if len(asset_books) < 2:
                continue

            # Find best bid and ask across all exchanges
            best_bid_exchange, best_bid_price, best_bid_size = self._find_best_bid(asset_books)
            best_ask_exchange, best_ask_price, best_ask_size = self._find_best_ask(asset_books)

            if best_bid_price > best_ask_price:
                # Calculate potential profit
                max_size = min(best_bid_size, best_ask_size,
                               MAX_POSITION_SIZE / best_ask_price)

                # Calculate costs
                buy_cost = transaction_costs.get(best_ask_exchange, TRANSACTION_COST_CEX)
                sell_cost = transaction_costs.get(best_bid_exchange, TRANSACTION_COST_CEX)

                # Add gas cost when either leg settles on a DEX (membership check;
                # venue names like 'Uniswap_V3' do not contain the substring 'dex')
                gas_cost = 0
                if best_ask_exchange in EXCHANGES['dex'] or best_bid_exchange in EXCHANGES['dex']:
                    gas_cost = GAS_COST_USD

                # Calculate profit
                gross_profit = (best_bid_price - best_ask_price) * max_size
                total_cost = (buy_cost + sell_cost) * best_ask_price * max_size + gas_cost
                net_profit = gross_profit - total_cost
                profit_pct = net_profit / (best_ask_price * max_size) if max_size > 0 else 0

                # Adjust for price forecast if available
                forecast_adjustment = 0.0
                if forecasts and asset in forecasts:
                    price_forecast = forecasts[asset]['price'][0]
                    forecast_adjustment = (price_forecast - best_ask_price) / best_ask_price
                    profit_pct += forecast_adjustment * 0.5  # Weight forecast impact

                if profit_pct > self.min_profit_threshold:
                    opportunity = ArbitrageOpportunity(
                        opportunity_id=f"{asset}_{datetime.now().strftime('%Y%m%d_%H%M%S%f')}",
                        asset=asset,
                        buy_exchange=best_ask_exchange,
                        sell_exchange=best_bid_exchange,
                        buy_price=best_ask_price,
                        sell_price=best_bid_price,
                        max_size=max_size,
                        expected_profit=net_profit,
                        expected_profit_pct=profit_pct,
                        latency_risk=self._calculate_latency_risk(
                            best_ask_exchange, best_bid_exchange
                        ),
                        timestamp=datetime.now(),
                        metadata={
                            'spread': best_bid_price - best_ask_price,
                            'gas_cost': gas_cost,
                            'transaction_costs': buy_cost + sell_cost,
                            'forecast_impact': forecast_adjustment
                        }
                    )

                    opportunities.append(opportunity)
                    self.opportunity_history.append(opportunity)

        return opportunities

    def _get_all_assets(self, order_books: Dict[str, Dict[str, OrderBook]]) -> set:
        """Get all unique assets across exchanges"""
        assets = set()
        for exchange_books in order_books.values():
            assets.update(exchange_books.keys())
        return assets

    def _get_asset_order_books(self, order_books: Dict[str, Dict[str, OrderBook]],
                               asset: str) -> Dict[str, OrderBook]:
        """Get order books for specific asset across exchanges"""
        asset_books = {}
        for exchange, books in order_books.items():
            if asset in books:
                asset_books[exchange] = books[asset]
        return asset_books

    def _find_best_bid(self, asset_books: Dict[str, OrderBook]) -> Tuple[str, float, float]:
        """Find best bid across exchanges"""
        best_exchange = None
        best_price = 0
        best_size = 0

        for exchange, book in asset_books.items():
            bid_price, bid_size = book.get_best_bid()
            if bid_price > best_price:
                best_exchange = exchange
                best_price = bid_price
                best_size = bid_size

        return best_exchange, best_price, best_size

    def _find_best_ask(self, asset_books: Dict[str, OrderBook]) -> Tuple[str, float, float]:
        """Find best ask across exchanges"""
        best_exchange = None
        best_price = float('inf')
        best_size = 0

        for exchange, book in asset_books.items():
            ask_price, ask_size = book.get_best_ask()
            if ask_price < best_price:
                best_exchange = exchange
                best_price = ask_price
                best_size = ask_size

        return best_exchange, best_price, best_size

    def _calculate_latency_risk(self, buy_exchange: str, sell_exchange: str) -> float:
        """Calculate latency risk score (0-1)"""
        # Higher risk for cross-exchange-type arbitrage
        buy_is_dex = buy_exchange in EXCHANGES['dex']
        sell_is_dex = sell_exchange in EXCHANGES['dex']

        if buy_is_dex != sell_is_dex:
            return 0.8  # High risk due to different settlement times
        elif buy_is_dex:
            return 0.6  # Medium risk for DEX-DEX
        else:
            return 0.3  # Lower risk for CEX-CEX
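# Worked example of the profit arithmetic in find_opportunities (hypothetical
# CEX-CEX prices): buy 1 unit at 100.0 and sell at 100.6 with 0.1% fees per leg
# charged on the buy notional; net profit is 0.6 - 0.2 = 0.4, i.e. 0.4% of the
# outlay, which clears the default 0.2% threshold.
_gross_demo = (100.6 - 100.0) * 1.0
_costs_demo = (TRANSACTION_COST_CEX + TRANSACTION_COST_CEX) * 100.0 * 1.0
assert abs((_gross_demo - _costs_demo) / 100.0 - 0.004) < 1e-12
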
class LLMStrategyOptimizer:
    """LLM-inspired strategy parameter optimization with machine learning"""

    def __init__(self):
        self.parameter_history = defaultdict(list)
        self.performance_history = []
        self.current_parameters = self._get_default_parameters()
        self.optimization_model = self._build_optimization_model()

    def _get_default_parameters(self) -> Dict[str, Any]:
        """Get default strategy parameters"""
        return {
            'min_profit_threshold': 0.002,
            'max_position_size': 100000,
            'risk_limit': 0.02,  # 2% per trade
            'correlation_threshold': 0.7,
            'rebalance_frequency': 300,  # seconds
            'latency_buffer': 1.5,  # multiplier for latency estimates
            'confidence_threshold': 0.6,
            'max_concurrent_trades': 5
        }

    def _build_optimization_model(self) -> nn.Module:
        """Build neural network for parameter optimization"""
        class ParameterOptimizer(nn.Module):
            def __init__(self):
                super().__init__()
                self.fc1 = nn.Linear(20, 64)  # Input features
                self.fc2 = nn.Linear(64, 32)
                self.fc3 = nn.Linear(32, 8)  # Output parameters
                self.relu = nn.ReLU()
                self.sigmoid = nn.Sigmoid()

            def forward(self, x):
                x = self.relu(self.fc1(x))
                x = self.relu(self.fc2(x))
                x = self.sigmoid(self.fc3(x))
                return x

        return ParameterOptimizer()

    def generate_parameter_suggestions(self,
                                       recent_performance: List[ExecutionResult],
                                       market_conditions: Dict[str, Any]) -> Dict[str, Any]:
        """Generate parameter adjustments using ML-based optimization"""

        suggestions = self.current_parameters.copy()

        if not recent_performance:
            return suggestions

        # Extract performance features
        success_rate = sum(1 for r in recent_performance if r.success) / len(recent_performance)
        avg_slippage = np.mean([r.slippage for r in recent_performance])
        avg_profit = np.mean([r.realized_profit for r in recent_performance])
        profit_variance = np.var([r.realized_profit for r in recent_performance])

        # Create feature vector
        features = [
            success_rate,
            avg_slippage,
            avg_profit / 1000,  # Normalize
            profit_variance / 1000000,  # Normalize
            market_conditions.get('volatility', 0.02),
            market_conditions.get('liquidity', 1.0),
            len(recent_performance) / 100,  # Normalize
            self.current_parameters['min_profit_threshold'],
            self.current_parameters['max_position_size'] / 1000000,
            self.current_parameters['risk_limit'],
            self.current_parameters['correlation_threshold'],
            self.current_parameters['rebalance_frequency'] / 3600,
            self.current_parameters['latency_buffer'],
            self.current_parameters['confidence_threshold'],
            self.current_parameters['max_concurrent_trades'] / 10,
            # Additional market features
            market_conditions.get('max_volatility', 0.03),
            float(datetime.now().hour) / 24,  # Time of day
            float(datetime.now().weekday()) / 7,  # Day of week
            0.5,  # Placeholder for sentiment (would be real in production)
            0.5  # Placeholder for market regime (would be real in production)
        ]

        # Use ML model for optimization
        feature_tensor = torch.FloatTensor([features])

        with torch.no_grad():
            adjustments = self.optimization_model(feature_tensor).numpy()[0]

        # Map each sigmoid output in [0, 1] onto its parameter's allowed range
        suggestions['min_profit_threshold'] = 0.001 + adjustments[0] * 0.009
        suggestions['max_position_size'] = 10000 + adjustments[1] * 490000
        suggestions['risk_limit'] = 0.005 + adjustments[2] * 0.045
        suggestions['correlation_threshold'] = 0.5 + adjustments[3] * 0.4
        suggestions['rebalance_frequency'] = 60 + adjustments[4] * 3540
        suggestions['latency_buffer'] = 1.0 + adjustments[5] * 2.0
        suggestions['confidence_threshold'] = 0.5 + adjustments[6] * 0.4
        suggestions['max_concurrent_trades'] = int(1 + adjustments[7] * 9)

        # Rule-based adjustments on top of ML
        if success_rate < 0.7:
            suggestions['min_profit_threshold'] *= 1.1
            suggestions['confidence_threshold'] *= 1.05

        if avg_slippage > 0.001:
            suggestions['max_position_size'] *= 0.9
            suggestions['latency_buffer'] *= 1.1

        if avg_profit < 0:
            suggestions['risk_limit'] *= 0.9
            suggestions['max_concurrent_trades'] = max(1, suggestions['max_concurrent_trades'] - 1)

        # Market condition adjustments
        if market_conditions.get('volatility', 0) > 0.03:
            suggestions['min_profit_threshold'] *= 1.2
            suggestions['correlation_threshold'] *= 0.9

        if market_conditions.get('liquidity', 1) < 0.5:
            suggestions['max_position_size'] *= 0.7

        # Ensure parameters stay within reasonable bounds
        suggestions = self._apply_parameter_bounds(suggestions)

        # Store suggestions
        self.parameter_history['suggestions'].append({
            'timestamp': datetime.now(),
            'parameters': suggestions,
            'reasoning': self._generate_reasoning(recent_performance, market_conditions),
            'performance_metrics': {
                'success_rate': success_rate,
                'avg_slippage': avg_slippage,
                'avg_profit': avg_profit
            }
        })

        self.current_parameters = suggestions
        return suggestions

    def _apply_parameter_bounds(self, parameters: Dict[str, Any]) -> Dict[str, Any]:
        """Apply bounds to parameters"""
        bounds = {
            'min_profit_threshold': (0.001, 0.01),
            'max_position_size': (10000, 500000),
            'risk_limit': (0.005, 0.05),
            'correlation_threshold': (0.5, 0.9),
            'rebalance_frequency': (60, 3600),
            'latency_buffer': (1.0, 3.0),
            'confidence_threshold': (0.5, 0.9),
            'max_concurrent_trades': (1, 10)
        }

        bounded = parameters.copy()
        for param, (min_val, max_val) in bounds.items():
            if param in bounded:
                bounded[param] = max(min_val, min(max_val, bounded[param]))

        return bounded

    def _generate_reasoning(self, performance: List[ExecutionResult],
                            market_conditions: Dict[str, Any]) -> str:
        """Generate reasoning for parameter adjustments"""

        reasons = []

        if performance:
            success_rate = sum(1 for r in performance if r.success) / len(performance)
            if success_rate < 0.7:
                reasons.append("Low success rate detected - increasing selectivity")

            avg_slippage = np.mean([r.slippage for r in performance])
            if avg_slippage > 0.001:
                reasons.append("High slippage observed - adjusting execution parameters")

            avg_profit = np.mean([r.realized_profit for r in performance])
            if avg_profit < 0:
                reasons.append("Negative average profit - tightening risk controls")

        if market_conditions.get('volatility', 0) > 0.03:
            reasons.append("Elevated market volatility - implementing conservative measures")

        if market_conditions.get('liquidity', 1) < 0.5:
            reasons.append("Reduced liquidity conditions - scaling down position sizes")

        return "; ".join(reasons) if reasons else "Standard market conditions"
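# Cold-start behaviour sketch: with no execution history the optimizer returns
# its defaults unchanged, so the first cycle always trades on the documented
# default parameters.
_opt_demo = LLMStrategyOptimizer()
assert _opt_demo.generate_parameter_suggestions([], {}) == _opt_demo._get_default_parameters()
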
class RiskAnalytics:
    """Comprehensive risk analytics system with advanced metrics"""

    def __init__(self):
        self.position_history = []
        self.var_confidence = 0.95
        self.risk_metrics_history = []
        self.correlation_matrix = None

    def calculate_var(self, returns: np.ndarray, confidence: float = 0.95) -> float:
        """Calculate Value at Risk using historical simulation"""
        if len(returns) < 20:
            return 0.02  # Default 2% VaR

        return np.percentile(returns, (1 - confidence) * 100)

    def calculate_cvar(self, returns: np.ndarray, confidence: float = 0.95) -> float:
        """Calculate Conditional Value at Risk (Expected Shortfall)"""
        var = self.calculate_var(returns, confidence)
        tail = returns[returns <= var]
        return tail.mean() if len(tail) > 0 else var  # Guard against an empty tail

    def calculate_sharpe_ratio(self, returns: np.ndarray) -> float:
        """Calculate Sharpe ratio"""
        if len(returns) < 2:
            return 0.0

        excess_returns = returns - RISK_FREE_RATE / TRADING_DAYS_PER_YEAR
        return np.sqrt(TRADING_DAYS_PER_YEAR) * excess_returns.mean() / (returns.std() + 1e-8)

    def calculate_sortino_ratio(self, returns: np.ndarray) -> float:
        """Calculate Sortino ratio (downside deviation)"""
        if len(returns) < 2:
            return 0.0

        excess_returns = returns - RISK_FREE_RATE / TRADING_DAYS_PER_YEAR
        downside_returns = returns[returns < 0]

        if len(downside_returns) == 0:
            return float('inf')  # No downside risk

        downside_std = np.std(downside_returns)
        return np.sqrt(TRADING_DAYS_PER_YEAR) * excess_returns.mean() / (downside_std + 1e-8)

    def calculate_max_drawdown(self, equity_curve: np.ndarray) -> float:
        """Calculate maximum drawdown"""
        peak = np.maximum.accumulate(equity_curve)
        drawdown = (peak - equity_curve) / peak
        return np.max(drawdown)

    def calculate_calmar_ratio(self, returns: np.ndarray, equity_curve: np.ndarray) -> float:
        """Calculate Calmar ratio (annualized return / max drawdown)"""
        max_dd = self.calculate_max_drawdown(equity_curve)
        if max_dd == 0:
            return float('inf')

        annual_return = returns.mean() * TRADING_DAYS_PER_YEAR
        return annual_return / max_dd

    def analyze_position_risk(self, positions: List[Dict[str, Any]],
                              market_data: Dict[str, pd.DataFrame]) -> Dict[str, Any]:
        """Analyze risk for current positions with comprehensive metrics"""

        if not positions:
            return self._empty_risk_metrics()

        # Calculate position values and collect per-asset return series
        position_values = []
        position_returns = []

        for position in positions:
            asset = position['asset']
            size = position['size']

            if asset in market_data:
                price = market_data[asset]['close'].iloc[-1]
                value = size * price
                position_values.append(value)

                returns = market_data[asset]['close'].pct_change().dropna()
                position_returns.append(returns)

        total_value = sum(position_values)

        # Calculate portfolio metrics
        if position_returns:
            # Create weighted portfolio returns
            weights = np.array(position_values) / total_value
            portfolio_returns = np.zeros(len(position_returns[0]))

            for weight, returns in zip(weights, position_returns):
                portfolio_returns += weight * returns.values

            # Calculate all risk metrics
            var = self.calculate_var(portfolio_returns)
            cvar = self.calculate_cvar(portfolio_returns)
            sharpe = self.calculate_sharpe_ratio(portfolio_returns)
            sortino = self.calculate_sortino_ratio(portfolio_returns)

            # Build equity curve
            equity_curve = (1 + portfolio_returns).cumprod()
            max_dd = self.calculate_max_drawdown(equity_curve)
            calmar = self.calculate_calmar_ratio(portfolio_returns, equity_curve)

            # Calculate correlation matrix
            returns_df = pd.DataFrame({
                f'asset_{i}': returns.values
                for i, returns in enumerate(position_returns)
            })
            correlation_matrix = returns_df.corr()
            self.correlation_matrix = correlation_matrix

            avg_correlation = correlation_matrix.values[np.triu_indices_from(
                correlation_matrix.values, k=1)].mean()
        else:
            var = cvar = sharpe = sortino = max_dd = calmar = avg_correlation = 0

        # Calculate additional risk metrics
        herfindahl_index = sum((v/total_value)**2 for v in position_values) if total_value > 0 else 0

        risk_metrics = {
            'total_exposure': total_value,
            'var_95': var,
            'cvar_95': cvar,
            'sharpe_ratio': sharpe,
            'sortino_ratio': sortino,
            'max_drawdown': max_dd,
            'calmar_ratio': calmar,
            'position_count': len(positions),
            'avg_correlation': avg_correlation,
            'concentration_risk': max(position_values) / total_value if total_value > 0 else 0,
            'herfindahl_index': herfindahl_index,
            'timestamp': datetime.now()
        }

        self.risk_metrics_history.append(risk_metrics)

        return risk_metrics

    def check_risk_limits(self, proposed_trade: ArbitrageOpportunity,
                          current_positions: List[Dict[str, Any]],
                          risk_parameters: Dict[str, Any]) -> Tuple[bool, str]:
        """Check if proposed trade violates risk limits"""

        # Check position limit
        position_value = proposed_trade.max_size * proposed_trade.buy_price

        if position_value > risk_parameters['max_position_size']:
            return False, "Position size exceeds limit"

        # Check total exposure
        current_exposure = sum(p['size'] * p['entry_price'] for p in current_positions)

        if current_exposure + position_value > risk_parameters['max_position_size'] * 5:
            return False, "Total exposure limit exceeded"

        # Check concurrent trades
        if len(current_positions) >= risk_parameters['max_concurrent_trades']:
            return False, "Maximum concurrent trades reached"

        # Check for overlap with existing positions
        same_asset_positions = [p for p in current_positions if p['asset'] == proposed_trade.asset]
        if same_asset_positions:
            return False, "Already have position in this asset"

        # Check risk/reward ratio
        if proposed_trade.expected_profit_pct < risk_parameters['min_profit_threshold']:
            return False, "Profit below minimum threshold"

        # Check latency risk
        if proposed_trade.latency_risk > risk_parameters.get('max_latency_risk', 0.7):
            return False, "Latency risk too high"

        return True, "Risk checks passed"

    def _empty_risk_metrics(self) -> Dict[str, Any]:
        """Return empty risk metrics"""
        return {
            'total_exposure': 0,
            'var_95': 0,
            'cvar_95': 0,
            'sharpe_ratio': 0,
            'sortino_ratio': 0,
            'max_drawdown': 0,
            'calmar_ratio': 0,
            'position_count': 0,
            'avg_correlation': 0,
            'concentration_risk': 0,
            'herfindahl_index': 0,
            'timestamp': datetime.now()
        }
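# Worked example for calculate_var (synthetic, evenly spaced returns): with 100
# observations the 95% historical VaR is the 5th percentile of the sample, and
# with fewer than 20 observations the method falls back to the 2% default.
_ra_demo = RiskAnalytics()
_rets_demo = np.linspace(-0.03, 0.03, 100)
assert _ra_demo.calculate_var(_rets_demo) == np.percentile(_rets_demo, (1 - 0.95) * 100)
assert _ra_demo.calculate_var(np.array([0.01, -0.02])) == 0.02
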
class LatencyAwareExecutionEngine:
    """Execution engine with realistic latency simulation and smart routing"""

    def __init__(self):
        self.execution_history = []
        self.latency_model = self._build_latency_model()
        self.slippage_model = self._build_slippage_model()
        self.execution_analytics = defaultdict(list)

    def _build_latency_model(self) -> Dict[str, Dict[str, float]]:
        """Build latency model for different exchange pairs"""
        return {
            'cex_cex': {'mean': 100, 'std': 20},  # CEX to CEX
            'cex_dex': {'mean': 1500, 'std': 500},  # CEX to DEX
            'dex_dex': {'mean': 2000, 'std': 800},  # DEX to DEX
        }

    def _build_slippage_model(self) -> Dict[str, float]:
        """Build slippage model based on market conditions"""
        return {
            'low_volatility': 0.0005,  # 5 bps
            'normal': 0.001,  # 10 bps
            'high_volatility': 0.002,  # 20 bps
            'extreme': 0.005  # 50 bps
        }

    def simulate_execution(self, opportunity: ArbitrageOpportunity,
                           buy_exchange: ExchangeSimulator,
                           sell_exchange: ExchangeSimulator,
                           market_conditions: Dict[str, Any]) -> ExecutionResult:
        """Simulate order execution with realistic latency and slippage"""

        # Determine exchange types
        exchange_pair = self._get_exchange_pair_type(
            opportunity.buy_exchange,
            opportunity.sell_exchange
        )

        # Simulate latency
        latency_params = self.latency_model[exchange_pair]
        total_latency = np.random.normal(
            latency_params['mean'],
            latency_params['std']
        )
        total_latency = max(0, total_latency)  # Ensure non-negative

        # Determine market volatility regime
        volatility_regime = self._get_volatility_regime(market_conditions)
        base_slippage = self.slippage_model[volatility_regime]

        # Calculate price movement during latency (correlated with volatility)
        volatility = market_conditions.get('volatility', 0.02)
        price_drift = np.random.normal(0, base_slippage * np.sqrt(total_latency / 1000) * (1 + volatility * 10))

        # Simulate buy execution
        buy_price_adjusted = opportunity.buy_price * (1 + price_drift)
        buy_book = buy_exchange.generate_order_book(
            opportunity.asset,
            buy_price_adjusted,
            market_conditions=market_conditions
        )

        buy_fill_price, buy_fill_size = buy_exchange.execute_order(
            buy_book, 'buy', opportunity.max_size
        )

        # Simulate sell execution (with additional latency)
        sell_latency = np.random.normal(50, 10)
        price_drift_sell = np.random.normal(
            0,
            base_slippage * np.sqrt((total_latency + sell_latency) / 1000) * (1 + volatility * 10)
        )

        sell_price_adjusted = opportunity.sell_price * (1 - price_drift_sell)
        sell_book = sell_exchange.generate_order_book(
            opportunity.asset,
            sell_price_adjusted,
            market_conditions=market_conditions
        )

        sell_fill_price, sell_fill_size = sell_exchange.execute_order(
            sell_book, 'sell', min(buy_fill_size, opportunity.max_size)
        )

        # Calculate realized profit
        executed_size = min(buy_fill_size, sell_fill_size)

        # Transaction costs
        buy_cost = buy_exchange.transaction_cost * buy_fill_price * executed_size
        sell_cost = sell_exchange.transaction_cost * sell_fill_price * executed_size

        # Gas costs for DEX legs (membership check; DEX names do not contain 'dex')
        gas_cost = 0
        if opportunity.buy_exchange in EXCHANGES['dex']:
            gas_cost += GAS_COST_USD
        if opportunity.sell_exchange in EXCHANGES['dex']:
            gas_cost += GAS_COST_USD

        # Net profit calculation
        gross_profit = (sell_fill_price - buy_fill_price) * executed_size
        net_profit = gross_profit - buy_cost - sell_cost - gas_cost

        # Calculate slippage
        expected_profit = (opportunity.sell_price - opportunity.buy_price) * executed_size
        slippage = (expected_profit - gross_profit) / expected_profit if expected_profit > 0 else 0

        # Determine success based on profitability
        success = net_profit > 0 and executed_size > 0

        result = ExecutionResult(
            opportunity_id=opportunity.opportunity_id,
            success=success,
            executed_size=executed_size,
            buy_fill_price=buy_fill_price,
            sell_fill_price=sell_fill_price,
            realized_profit=net_profit,
            slippage=slippage,
            latency_ms=total_latency,
            gas_cost=gas_cost,
            timestamp=datetime.now()
        )

        self.execution_history.append(result)

        # Track execution analytics
        self.execution_analytics['asset'].append(opportunity.asset)
        self.execution_analytics['exchange_pair'].append(exchange_pair)
        self.execution_analytics['volatility_regime'].append(volatility_regime)

        return result

    def _get_exchange_pair_type(self, buy_exchange: str, sell_exchange: str) -> str:
        """Determine exchange pair type"""
        buy_is_dex = buy_exchange in EXCHANGES['dex']
        sell_is_dex = sell_exchange in EXCHANGES['dex']

        if buy_is_dex and sell_is_dex:
            return 'dex_dex'
        elif not buy_is_dex and not sell_is_dex:
            return 'cex_cex'
        else:
            return 'cex_dex'

    def _get_volatility_regime(self, market_conditions: Dict[str, Any]) -> str:
        """Determine current volatility regime"""
        volatility = market_conditions.get('volatility', 0.02)

        if volatility < 0.015:
            return 'low_volatility'
        elif volatility < 0.03:
            return 'normal'
        elif volatility < 0.05:
            return 'high_volatility'
        else:
            return 'extreme'

    def optimize_execution_path(self, opportunities: List[ArbitrageOpportunity],
                                current_positions: List[Dict[str, Any]],
                                risk_parameters: Dict[str, Any]) -> List[ArbitrageOpportunity]:
        """Optimize execution order considering dependencies and risk"""

        if not opportunities:
            return []

        # Score opportunities based on multiple factors
        scored_opportunities = []

        for opp in opportunities:
            # Multi-factor scoring
            profit_score = opp.expected_profit_pct
            latency_penalty = opp.latency_risk * 0.5
            size_score = min(opp.max_size * opp.buy_price / risk_parameters['max_position_size'], 1.0)

            # Add forecast confidence if available
            forecast_confidence = 1 - opp.metadata.get('forecast_uncertainty', 0.5)

            # Combined score
            total_score = profit_score * (1 - latency_penalty) * size_score * forecast_confidence

            scored_opportunities.append((total_score, opp))

        # Sort by score (highest first)
        scored_opportunities.sort(key=lambda x: x[0], reverse=True)

        # Select top opportunities that don't violate risk limits
        selected = []
        simulated_positions = current_positions.copy()

        for score, opp in scored_opportunities:
            # Simulate adding this position
            can_add, reason = self._can_add_opportunity(
                opp, simulated_positions, risk_parameters
            )

            if can_add:
                selected.append(opp)
                simulated_positions.append({
                    'asset': opp.asset,
                    'size': opp.max_size,
                    'entry_price': opp.buy_price
                })

            if len(selected) >= risk_parameters['max_concurrent_trades']:
                break

        return selected

    def _can_add_opportunity(self, opportunity: ArbitrageOpportunity,
                             positions: List[Dict[str, Any]],
                             risk_parameters: Dict[str, Any]) -> Tuple[bool, str]:
        """Check if opportunity can be added to positions"""

        # Check if already have position in asset
        for pos in positions:
            if pos['asset'] == opportunity.asset:
                return False, "Already have position in asset"

        # Check total exposure
        current_exposure = sum(p['size'] * p['entry_price'] for p in positions)
        new_exposure = opportunity.max_size * opportunity.buy_price

        if current_exposure + new_exposure > risk_parameters['max_position_size'] * 5:
            return False, "Would exceed total exposure limit"

        return True, "OK"
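# Quick check of the volatility-regime buckets used by the slippage model
# (thresholds as defined above): 1% annualized volatility maps to
# 'low_volatility' and 4% to 'high_volatility'.
_exec_demo = LatencyAwareExecutionEngine()
assert _exec_demo._get_volatility_regime({'volatility': 0.01}) == 'low_volatility'
assert _exec_demo._get_volatility_regime({'volatility': 0.04}) == 'high_volatility'
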
1254
+ class CrossAssetArbitrageEngine:
1255
+ """Main arbitrage engine coordinating all components"""
1256
+
1257
+ def __init__(self):
1258
+ # Initialize components
1259
+ self.price_forecaster = PriceForecastingEngine()
1260
+ self.arbitrage_detector = ArbitrageDetector()
1261
+ self.strategy_optimizer = LLMStrategyOptimizer()
1262
+ self.risk_analytics = RiskAnalytics()
1263
+ self.execution_engine = LatencyAwareExecutionEngine()
1264
+
1265
+ # Exchange simulators
1266
+ self.exchanges = {}
1267
+ for exchange in EXCHANGES['cex']:
1268
+ self.exchanges[exchange] = ExchangeSimulator('cex')
1269
+ for exchange in EXCHANGES['dex']:
1270
+ self.exchanges[exchange] = ExchangeSimulator('dex')
1271
+
1272
+ # State management
1273
+ self.active_positions = []
1274
+ self.portfolio_value = 100000 # Starting capital
1275
+ self.performance_history = []
1276
+ self.market_data_cache = {}
1277
+ self.forecasts_cache = {}
1278
+
1279
+ def generate_market_data(self, assets: List[str], days: int = 100) -> Dict[str, pd.DataFrame]:
1280
+ """Generate realistic correlated market data for multiple assets"""
1281
+ market_data = {}
1282
+
1283
+ # Generate correlation matrix for assets
1284
+ n_assets = len(assets)
1285
+ correlation_matrix = np.eye(n_assets)
1286
+
1287
+ # Add correlations between assets
1288
+ for i in range(n_assets):
1289
+ for j in range(i+1, n_assets):
1290
+ # Crypto assets are more correlated
1291
+ if (assets[i] in ASSET_CLASSES['crypto_spot'] and
1292
+ assets[j] in ASSET_CLASSES['crypto_spot']):
1293
+ corr = np.random.uniform(0.6, 0.9)
1294
+ # FX pairs have moderate correlation
1295
+ elif (assets[i] in ASSET_CLASSES['fx_pairs'] and
1296
+ assets[j] in ASSET_CLASSES['fx_pairs']):
1297
+ corr = np.random.uniform(0.3, 0.6)
1298
+ # Different asset classes have low correlation
1299
+ else:
1300
+ corr = np.random.uniform(-0.2, 0.3)
1301
+
1302
+ correlation_matrix[i, j] = corr
1303
+ correlation_matrix[j, i] = corr
1304
+
1305
+ # Generate correlated returns
1306
+ mean_returns = np.zeros(n_assets)
1307
+ volatilities = []
1308
+
1309
+ for asset in assets:
1310
+ if asset in ASSET_CLASSES['crypto_spot']:
1311
+ volatilities.append(0.015) # Higher volatility
1312
+ elif asset in ASSET_CLASSES['fx_pairs']:
1313
+ volatilities.append(0.005) # Lower volatility
1314
+ else:
1315
+ volatilities.append(0.01) # Medium volatility
1316
+
1317
+ cov_matrix = np.outer(volatilities, volatilities) * correlation_matrix
1318
+
1319
+ # Generate returns
1320
+ returns = np.random.multivariate_normal(mean_returns, cov_matrix, days)
1321
+
1322
+ # Generate price data for each asset
1323
+ for i, asset in enumerate(assets):
1324
+ # Base price
1325
+ if asset in ASSET_CLASSES['crypto_spot']:
1326
+ base_price = {'BTC': 45000, 'ETH': 3000, 'SOL': 100}.get(asset, 50)
1327
+ elif asset in ASSET_CLASSES['equity_etfs']:
1328
+ base_price = {'SPY': 450, 'QQQ': 380}.get(asset, 100)
1329
+ else:
1330
+ base_price = 1.0 # FX pairs
1331
+
1332
+ # Generate prices from returns
1333
+ prices = base_price * np.exp(np.cumsum(returns[:, i]))
1334
+
1335
+ # Generate OHLCV data
1336
+ dates = pd.date_range(end=datetime.now(), periods=days, freq='H')
1337
+
1338
+ data = pd.DataFrame({
1339
+ 'open': prices * (1 + np.random.normal(0, 0.002, days)),
1340
+ 'high': prices * (1 + np.abs(np.random.normal(0, 0.005, days))),
1341
+ 'low': prices * (1 - np.abs(np.random.normal(0, 0.005, days))),
1342
+ 'close': prices,
1343
+ 'volume': np.random.lognormal(15, 0.5, days)
1344
+ }, index=dates)
1345
+
1346
+ # Ensure OHLC consistency
1347
+ data['high'] = data[['open', 'high', 'close']].max(axis=1)
1348
+ data['low'] = data[['open', 'low', 'close']].min(axis=1)
1349
+
1350
+ market_data[asset] = data
1351
+
1352
+ self.market_data_cache = market_data
1353
+ return market_data
1354
+
1355
+ def update_order_books(self, market_data: Dict[str, pd.DataFrame]) -> Dict[str, Dict[str, OrderBook]]:
1356
+ """Generate current order books for all exchanges"""
1357
+ order_books = defaultdict(dict)
1358
+
1359
+ # Get current market conditions
1360
+ market_conditions = self.calculate_market_conditions(market_data)
1361
+
1362
+ for asset, data in market_data.items():
1363
+ current_price = data['close'].iloc[-1]
1364
+
1365
+ # Generate order books for each exchange
1366
+ for exchange_name, exchange in self.exchanges.items():
1367
+ # Add price variation between exchanges
1368
+ price_variation = np.random.normal(0, 0.0005)
1369
+ adjusted_price = current_price * (1 + price_variation)
1370
+
1371
+ # Vary spread based on exchange type and market conditions
1372
+ base_spread = 5 if exchange.exchange_type == 'cex' else 15
1373
+ volatility_adjustment = 1 + market_conditions['volatility'] * 20
1374
+ spread_bps = base_spread * volatility_adjustment
1375
+
1376
+ order_book = exchange.generate_order_book(
1377
+ asset, adjusted_price, spread_bps, market_conditions
1378
+ )
1379
+
1380
+ order_books[exchange_name][asset] = order_book
1381
+
1382
+ return dict(order_books)
+
+    def calculate_market_conditions(self, market_data: Dict[str, pd.DataFrame]) -> Dict[str, Any]:
+        """Calculate current market conditions"""
+
+        volatilities = []
+        volumes = []
+        spreads = []
+
+        for asset, data in market_data.items():
+            returns = data['close'].pct_change().dropna()
+
+            # Calculate volatility (annualized)
+            volatility = returns.iloc[-24:].std() * np.sqrt(365 * 24)
+            volatilities.append(volatility)
+
+            # Calculate average volume
+            avg_volume = data['volume'].iloc[-24:].mean()
+            volumes.append(avg_volume)
+
+            # Calculate spread proxy
+            spread = (data['high'] - data['low']).iloc[-24:].mean() / data['close'].iloc[-24:].mean()
+            spreads.append(spread)
+
+        return {
+            'volatility': np.mean(volatilities),
+            'max_volatility': np.max(volatilities),
+            'liquidity': np.mean(volumes) / 1e6,  # Normalize
+            'avg_spread': np.mean(spreads),
+            'timestamp': datetime.now()
+        }
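+    # Annualization note: with 24/7 hourly bars there are 365 * 24 = 8760 bars
+    # per year, so an hourly return std of 0.01 maps to roughly
+    # 0.01 * sqrt(8760) ≈ 0.94, i.e. about 94% annualized volatility.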
+
+    def generate_price_forecasts(self, market_data: Dict[str, pd.DataFrame]) -> Dict[str, Dict[str, np.ndarray]]:
+        """Generate price forecasts for all assets"""
+        forecasts = {}
+
+        for asset, data in market_data.items():
+            forecast = self.price_forecaster.forecast_prices(asset, data)
+            forecasts[asset] = forecast
+
+        self.forecasts_cache = forecasts
+        return forecasts
+
+    def run_arbitrage_cycle(self) -> Dict[str, Any]:
+        """Run complete arbitrage detection and execution cycle"""
+
+        # Get current market data
+        if not self.market_data_cache:
+            assets = []
+            for asset_class, asset_list in ASSET_CLASSES.items():
+                assets.extend(asset_list[:2])  # Use first 2 from each class
+            self.market_data_cache = self.generate_market_data(assets)
+
+        market_data = self.market_data_cache
+
+        # Generate price forecasts
+        forecasts = self.generate_price_forecasts(market_data)
+
+        # Update order books
+        order_books = self.update_order_books(market_data)
+
+        # Calculate market conditions
+        market_conditions = self.calculate_market_conditions(market_data)
+
+        # Update strategy parameters based on recent performance
+        recent_executions = self.execution_engine.execution_history[-20:]
+        strategy_params = self.strategy_optimizer.generate_parameter_suggestions(
+            recent_executions, market_conditions
+        )
+
+        # Find arbitrage opportunities with forecast integration
+        transaction_costs = {
+            exchange: sim.transaction_cost
+            for exchange, sim in self.exchanges.items()
+        }
+
+        opportunities = self.arbitrage_detector.find_opportunities(
+            order_books, transaction_costs, forecasts
+        )
+
+        # Filter based on strategy parameters
+        filtered_opportunities = [
+            opp for opp in opportunities
+            if opp.expected_profit_pct >= strategy_params['min_profit_threshold']
+        ]
+
+        # Risk analysis
+        risk_metrics = self.risk_analytics.analyze_position_risk(
+            self.active_positions, market_data
+        )
+
+        # Optimize execution order
+        selected_opportunities = self.execution_engine.optimize_execution_path(
+            filtered_opportunities, self.active_positions, strategy_params
+        )
+
+        # Execute selected opportunities
+        execution_results = []
+
+        for opportunity in selected_opportunities:
+            # Final risk check
+            can_execute, reason = self.risk_analytics.check_risk_limits(
+                opportunity, self.active_positions, strategy_params
+            )
+
+            if can_execute:
+                # Execute trade
+                buy_exchange = self.exchanges[opportunity.buy_exchange]
+                sell_exchange = self.exchanges[opportunity.sell_exchange]
+
+                result = self.execution_engine.simulate_execution(
+                    opportunity, buy_exchange, sell_exchange, market_conditions
+                )
+
+                execution_results.append(result)
+
+                # Update positions if successful
+                if result.success:
+                    self.active_positions.append({
+                        'asset': opportunity.asset,
+                        'size': result.executed_size,
+                        'entry_price': result.buy_fill_price,
+                        'exit_price': result.sell_fill_price,
+                        'profit': result.realized_profit,
+                        'timestamp': result.timestamp
+                    })
+
+                    # Update portfolio value
+                    self.portfolio_value += result.realized_profit
+
+        # Clean up positions older than five minutes (in this simulation, all arb
+        # trades complete immediately); total_seconds() keeps the comparison
+        # correct even across day boundaries, unlike timedelta.seconds
+        self.active_positions = [p for p in self.active_positions
+                                 if (datetime.now() - p['timestamp']).total_seconds() < 300]
+
+        # Store performance metrics
+        cycle_summary = {
+            'timestamp': datetime.now(),
+            'opportunities_found': len(opportunities),
+            'opportunities_executed': len(execution_results),
+            'successful_executions': sum(1 for r in execution_results if r.success),
+            'total_profit': sum(r.realized_profit for r in execution_results),
+            'portfolio_value': self.portfolio_value,
+            'risk_metrics': risk_metrics,
+            'market_conditions': market_conditions,
+            'strategy_parameters': strategy_params
+        }
+
+        self.performance_history.append(cycle_summary)
+
+        return cycle_summary
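+
+# Illustrative headless usage (a sketch, not part of the Gradio app below):
+#
+#     engine = CrossAssetArbitrageEngine()
+#     for _ in range(5):
+#         summary = engine.run_arbitrage_cycle()
+#         print(summary['total_profit'], summary['portfolio_value'])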
+
+# Visualization functions
+def create_opportunity_network(opportunities: List[ArbitrageOpportunity]) -> go.Figure:
+    """Create network visualization of arbitrage opportunities"""
+
+    # Create graph
+    G = nx.Graph()
+
+    # Add nodes and edges
+    for opp in opportunities:
+        G.add_edge(
+            opp.buy_exchange,
+            opp.sell_exchange,
+            weight=opp.expected_profit_pct,
+            asset=opp.asset
+        )
+
+    if len(G.nodes()) == 0:
+        # Empty graph
+        fig = go.Figure()
+        fig.add_annotation(
+            text="No arbitrage opportunities found",
+            xref="paper", yref="paper",
+            x=0.5, y=0.5, showarrow=False
+        )
+        return fig
+
+    # Calculate layout
+    pos = nx.spring_layout(G, k=2, iterations=50)
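+    # Layout note: spring_layout is stochastic; passing a fixed seed (e.g.
+    # nx.spring_layout(G, k=2, iterations=50, seed=42)) would keep node
+    # positions stable between refreshes. k controls target node spacing.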
1561
+
1562
+ # Create edge trace
1563
+ edge_trace = []
1564
+ for edge in G.edges(data=True):
1565
+ x0, y0 = pos[edge[0]]
1566
+ x1, y1 = pos[edge[1]]
1567
+
1568
+ edge_trace.append(go.Scatter(
1569
+ x=[x0, x1, None],
1570
+ y=[y0, y1, None],
1571
+ mode='lines',
1572
+ line=dict(
1573
+ width=edge[2]['weight'] * 100,
1574
+ color='rgba(125,125,125,0.5)'
1575
+ ),
1576
+ hoverinfo='text',
1577
+ text=f"{edge[2]['asset']}: {edge[2]['weight']*100:.2f}%"
1578
+ ))
1579
+
1580
+ # Create node trace
1581
+ node_trace = go.Scatter(
1582
+ x=[pos[node][0] for node in G.nodes()],
1583
+ y=[pos[node][1] for node in G.nodes()],
1584
+ mode='markers+text',
1585
+ text=[node for node in G.nodes()],
1586
+ textposition="top center",
1587
+ marker=dict(
1588
+ size=20,
1589
+ color=['red' if 'dex' in node.lower() else 'blue' for node in G.nodes()],
1590
+ line=dict(color='darkgray', width=2)
1591
+ ),
1592
+ hoverinfo='text'
1593
+ )
1594
+
1595
+ # Create figure
1596
+ fig = go.Figure(data=edge_trace + [node_trace])
1597
+
1598
+ fig.update_layout(
1599
+ title="Arbitrage Opportunity Network",
1600
+ showlegend=False,
1601
+ xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
1602
+ yaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
1603
+ height=500
1604
+ )
1605
+
1606
+ return fig
+
+def create_performance_dashboard(performance_history: List[Dict[str, Any]]) -> go.Figure:
+    """Create comprehensive performance dashboard"""
+
+    if not performance_history:
+        fig = go.Figure()
+        fig.add_annotation(
+            text="No performance data available",
+            xref="paper", yref="paper",
+            x=0.5, y=0.5, showarrow=False
+        )
+        return fig
+
+    # Convert to DataFrame
+    perf_df = pd.DataFrame(performance_history)
+
+    # Create subplots
+    fig = make_subplots(
+        rows=3, cols=2,
+        subplot_titles=(
+            'Portfolio Value', 'Profit per Cycle',
+            'Success Rate', 'Risk Metrics',
+            'Opportunities vs Executions', 'Market Conditions'
+        ),
+        specs=[
+            [{"type": "scatter"}, {"type": "scatter"}],
+            [{"type": "scatter"}, {"type": "scatter"}],
+            [{"type": "bar"}, {"type": "scatter"}]
+        ],
+        vertical_spacing=0.1,
+        horizontal_spacing=0.1
+    )
+
+    # Portfolio value
+    fig.add_trace(
+        go.Scatter(
+            x=perf_df['timestamp'],
+            y=perf_df['portfolio_value'],
+            mode='lines',
+            name='Portfolio Value',
+            line=dict(color='blue', width=2)
+        ),
+        row=1, col=1
+    )
+
+    # Profit per cycle
+    fig.add_trace(
+        go.Scatter(
+            x=perf_df['timestamp'],
+            y=perf_df['total_profit'],
+            mode='lines+markers',
+            name='Profit',
+            line=dict(color='green')
+        ),
+        row=1, col=2
+    )
+
+    # Success rate
+    perf_df['success_rate'] = perf_df.apply(
+        lambda x: x['successful_executions'] / x['opportunities_executed'] if x['opportunities_executed'] > 0 else 0,
+        axis=1
+    )
+    fig.add_trace(
+        go.Scatter(
+            x=perf_df['timestamp'],
+            y=perf_df['success_rate'],
+            mode='lines',
+            name='Success Rate',
+            line=dict(color='orange')
+        ),
+        row=2, col=1
+    )
+
+    # Risk metrics (Sharpe ratio)
+    sharpe_values = [m['sharpe_ratio'] for m in perf_df['risk_metrics']]
+    fig.add_trace(
+        go.Scatter(
+            x=perf_df['timestamp'],
+            y=sharpe_values,
+            mode='lines',
+            name='Sharpe Ratio',
+            line=dict(color='purple')
+        ),
+        row=2, col=2
+    )
+
+    # Opportunities vs Executions
+    fig.add_trace(
+        go.Bar(
+            x=perf_df['timestamp'],
+            y=perf_df['opportunities_found'],
+            name='Found',
+            marker_color='lightblue'
+        ),
+        row=3, col=1
+    )
+    fig.add_trace(
+        go.Bar(
+            x=perf_df['timestamp'],
+            y=perf_df['opportunities_executed'],
+            name='Executed',
+            marker_color='darkblue'
+        ),
+        row=3, col=1
+    )
+
+    # Market volatility
+    volatility_values = [m['volatility'] for m in perf_df['market_conditions']]
+    fig.add_trace(
+        go.Scatter(
+            x=perf_df['timestamp'],
+            y=volatility_values,
+            mode='lines',
+            name='Volatility',
+            line=dict(color='red')
+        ),
+        row=3, col=2
+    )
+
+    # Update layout
+    fig.update_layout(
+        height=1000,
+        showlegend=False,
+        title_text="Cross-Asset Arbitrage Performance Dashboard"
+    )
+
+    # Update axes
+    fig.update_xaxes(title_text="Time", row=3, col=1)
+    fig.update_xaxes(title_text="Time", row=3, col=2)
+    fig.update_yaxes(title_text="Value ($)", row=1, col=1)
+    fig.update_yaxes(title_text="Profit ($)", row=1, col=2)
+    fig.update_yaxes(title_text="Rate", row=2, col=1)
+    fig.update_yaxes(title_text="Sharpe", row=2, col=2)
+    fig.update_yaxes(title_text="Count", row=3, col=1)
+    fig.update_yaxes(title_text="Volatility", row=3, col=2)
+
+    return fig
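+# Note on the dashboard above: the two Bar traces in row 3 share an axis and
+# use Plotly's default barmode='group', so found vs. executed counts render
+# side by side for each cycle.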
+
+def create_execution_analysis(execution_history: List[ExecutionResult]) -> go.Figure:
+    """Create execution analysis visualization"""
+
+    if not execution_history:
+        fig = go.Figure()
+        fig.add_annotation(
+            text="No execution data available",
+            xref="paper", yref="paper",
+            x=0.5, y=0.5, showarrow=False
+        )
+        return fig
+
+    # Convert to DataFrame
+    exec_df = pd.DataFrame([
+        {
+            'timestamp': e.timestamp,
+            'profit': e.realized_profit,
+            'slippage': e.slippage,
+            'latency': e.latency_ms,
+            'success': e.success
+        }
+        for e in execution_history
+    ])
+
+    # Create subplots
+    fig = make_subplots(
+        rows=2, cols=2,
+        subplot_titles=(
+            'Profit Distribution', 'Slippage Analysis',
+            'Latency Distribution', 'Success Rate Over Time'
+        ),
+        specs=[
+            [{"type": "histogram"}, {"type": "scatter"}],
+            [{"type": "histogram"}, {"type": "scatter"}]
+        ]
+    )
+
+    # Profit distribution
+    fig.add_trace(
+        go.Histogram(
+            x=exec_df['profit'],
+            nbinsx=30,
+            name='Profit',
+            marker_color='green'
+        ),
+        row=1, col=1
+    )
+
+    # Slippage over time
+    fig.add_trace(
+        go.Scatter(
+            x=exec_df['timestamp'],
+            y=exec_df['slippage'] * 100,  # Convert to percentage
+            mode='markers',
+            name='Slippage',
+            marker=dict(
+                color=exec_df['success'].map({True: 'blue', False: 'red'}),
+                size=8
+            )
+        ),
+        row=1, col=2
+    )
+
+    # Latency distribution
+    fig.add_trace(
+        go.Histogram(
+            x=exec_df['latency'],
+            nbinsx=30,
+            name='Latency',
+            marker_color='orange'
+        ),
+        row=2, col=1
+    )
+
+    # Success rate over time (rolling)
+    exec_df['success_int'] = exec_df['success'].astype(int)
+    exec_df['success_rate_rolling'] = exec_df['success_int'].rolling(
+        window=20, min_periods=1
+    ).mean()
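+    # A 20-trade rolling mean smooths the success series; min_periods=1 lets the
+    # curve start from the first execution instead of waiting for 20 trades.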
+
+    fig.add_trace(
+        go.Scatter(
+            x=exec_df['timestamp'],
+            y=exec_df['success_rate_rolling'],
+            mode='lines',
+            name='Success Rate',
+            line=dict(color='purple', width=2)
+        ),
+        row=2, col=2
+    )
+
+    # Update layout
+    fig.update_layout(
+        height=700,
+        showlegend=False,
+        title_text="Execution Analysis"
+    )
+
+    # Update axes
+    fig.update_xaxes(title_text="Profit ($)", row=1, col=1)
+    fig.update_xaxes(title_text="Time", row=1, col=2)
+    fig.update_xaxes(title_text="Latency (ms)", row=2, col=1)
+    fig.update_xaxes(title_text="Time", row=2, col=2)
+    fig.update_yaxes(title_text="Count", row=1, col=1)
+    fig.update_yaxes(title_text="Slippage (%)", row=1, col=2)
+    fig.update_yaxes(title_text="Count", row=2, col=1)
+    fig.update_yaxes(title_text="Success Rate", row=2, col=2)
+
+    return fig
+
+# Gradio Interface
+def create_gradio_interface():
+    """Create the main Gradio interface"""
+
+    # Initialize engine
+    engine = CrossAssetArbitrageEngine()
+
+    def run_arbitrage_simulation(n_cycles, initial_capital, min_profit_threshold):
+        """Run arbitrage simulation"""
+
+        # Reset engine
+        engine.portfolio_value = float(initial_capital)
+        engine.performance_history = []
+        engine.execution_engine.execution_history = []
+        engine.active_positions = []
+
+        # Update strategy parameters
+        engine.strategy_optimizer.current_parameters['min_profit_threshold'] = float(min_profit_threshold) / 100
+
+        # Generate initial market data
+        assets = []
+        for asset_class, asset_list in ASSET_CLASSES.items():
+            assets.extend(asset_list[:2])  # Use 2 assets from each class
+
+        engine.generate_market_data(assets, days=200)
+
+        # Run simulation cycles
+        cycle_summaries = []
+        for i in range(int(n_cycles)):
+            # Update market data (simulate price movement)
+            for asset, data in engine.market_data_cache.items():
+                # Add new price point
+                last_price = data['close'].iloc[-1]
+                new_return = np.random.normal(0.0001, 0.01)
+                new_price = last_price * (1 + new_return)
+
+                new_row = pd.DataFrame({
+                    'open': [new_price * (1 + np.random.normal(0, 0.002))],
+                    'high': [new_price * (1 + abs(np.random.normal(0, 0.005)))],
+                    'low': [new_price * (1 - abs(np.random.normal(0, 0.005)))],
+                    'close': [new_price],
+                    'volume': [np.random.lognormal(15, 0.5)]
+                }, index=[data.index[-1] + pd.Timedelta(hours=1)])
+
+                # Ensure OHLC consistency
+                new_row['high'] = new_row[['open', 'high', 'close']].max(axis=1)
+                new_row['low'] = new_row[['open', 'low', 'close']].min(axis=1)
+
+                engine.market_data_cache[asset] = pd.concat([data, new_row])
+
+                # Keep only recent data
+                engine.market_data_cache[asset] = engine.market_data_cache[asset].iloc[-200:]
+
+            # Run arbitrage cycle
+            summary = engine.run_arbitrage_cycle()
+            cycle_summaries.append(summary)
+
+        # Create visualizations
+        opportunity_network = create_opportunity_network(
+            engine.arbitrage_detector.opportunity_history[-50:]
+        )
+
+        performance_dashboard = create_performance_dashboard(
+            engine.performance_history
+        )
+
+        execution_analysis = create_execution_analysis(
+            engine.execution_engine.execution_history
+        )
+
+        # Calculate summary statistics
+        total_profit = sum(s['total_profit'] for s in engine.performance_history)
+        total_return = (engine.portfolio_value - initial_capital) / initial_capital
+
+        if len(engine.execution_engine.execution_history) > 0:
+            success_rate = sum(
+                1 for e in engine.execution_engine.execution_history if e.success
+            ) / len(engine.execution_engine.execution_history)
+            avg_latency = np.mean([
+                e.latency_ms for e in engine.execution_engine.execution_history
+            ])
+            avg_slippage = np.mean([
+                e.slippage for e in engine.execution_engine.execution_history
+            ])
+        else:
+            success_rate = avg_latency = avg_slippage = 0
+
+        # Get latest risk metrics
+        if engine.risk_analytics.risk_metrics_history:
+            latest_risk = engine.risk_analytics.risk_metrics_history[-1]
+            sharpe = latest_risk['sharpe_ratio']
+            var_95 = latest_risk['var_95']
+        else:
+            sharpe = var_95 = 0
+
+        summary_text = f"""
+### Simulation Summary
+
+**Performance Metrics:**
+- Total Profit: ${total_profit:,.2f}
+- Total Return: {total_return*100:.2f}%
+- Final Portfolio Value: ${engine.portfolio_value:,.2f}
+- Sharpe Ratio: {sharpe:.2f}
+- VaR (95%): {var_95*100:.2f}%
+
+**Execution Statistics:**
+- Total Opportunities Found: {sum(s['opportunities_found'] for s in engine.performance_history)}
+- Total Executions: {len(engine.execution_engine.execution_history)}
+- Success Rate: {success_rate*100:.1f}%
+- Average Latency: {avg_latency:.0f}ms
+- Average Slippage: {avg_slippage*100:.2f}%
+
+**Active Positions:** {len(engine.active_positions)}
+"""
+
+        # Latest opportunities table
+        recent_opps = []
+        for opp in engine.arbitrage_detector.opportunity_history[-10:]:
+            recent_opps.append({
+                'Asset': opp.asset,
+                'Buy Exchange': opp.buy_exchange,
+                'Sell Exchange': opp.sell_exchange,
+                'Spread': f"{(opp.sell_price - opp.buy_price)/opp.buy_price*100:.2f}%",
+                'Expected Profit': f"${opp.expected_profit:.2f}",
+                'Latency Risk': f"{opp.latency_risk:.2f}"
+            })
+
+        opportunities_df = pd.DataFrame(recent_opps) if recent_opps else pd.DataFrame()
+
+        return (opportunity_network, performance_dashboard, execution_analysis,
+                summary_text, opportunities_df)
+
+    def analyze_strategy_parameters():
+        """Analyze current strategy parameters"""
+
+        if not engine.strategy_optimizer.parameter_history:
+            # Return None for the Plot output; a plain string would not render there
+            return None, "No parameter history available"
+
+        # Get parameter evolution
+        param_history = engine.strategy_optimizer.parameter_history.get('suggestions', [])
+
+        if not param_history:
+            return None, "No parameter suggestions generated"
+
+        # Create parameter evolution chart
+        param_df = pd.DataFrame([
+            {
+                'timestamp': entry['timestamp'],
+                **entry['parameters']
+            }
+            for entry in param_history
+        ])
+
+        fig = make_subplots(
+            rows=2, cols=2,
+            subplot_titles=(
+                'Profit Threshold', 'Position Size',
+                'Risk Limit', 'Confidence Threshold'
+            )
+        )
+
+        # Plot each parameter
+        params_to_plot = [
+            ('min_profit_threshold', 1, 1, 'Threshold'),
+            ('max_position_size', 1, 2, 'Size ($)'),
+            ('risk_limit', 2, 1, 'Limit'),
+            ('confidence_threshold', 2, 2, 'Threshold')
+        ]
+
+        for param, row, col, ylabel in params_to_plot:
+            if param in param_df.columns:
+                fig.add_trace(
+                    go.Scatter(
+                        x=param_df['timestamp'],
+                        y=param_df[param],
+                        mode='lines+markers',
+                        name=param
+                    ),
+                    row=row, col=col
+                )
+
+        fig.update_layout(
+            height=600,
+            showlegend=False,
+            title_text="Strategy Parameter Evolution"
+        )
+
+        # Get latest reasoning
+        latest_reasoning = param_history[-1]['reasoning'] if param_history else "No reasoning available"
+
+        return fig, f"**Latest Optimization Reasoning:** {latest_reasoning}"
+
+    # Create interface
+    with gr.Blocks(title="Cross-Asset Arbitrage Engine") as interface:
+        gr.Markdown("""
+# Cross-Asset Arbitrage Engine with Transformer Models
+
+This sophisticated arbitrage engine leverages transformer models for price forecasting across multiple asset classes:
+- **Crypto Spot/Futures**: BTC, ETH, SOL and perpetual futures
+- **Foreign Exchange**: Major currency pairs
+- **Equity ETFs**: SPY, QQQ, IWM and international markets
+
+Features:
+- **Transformer Price Prediction**: Numerical-adapted transformers for multi-horizon forecasting
+- **Cross-Venue Execution**: Simulates CEX (Binance, Coinbase) and DEX (Uniswap V3) integration
+- **LLM Strategy Optimization**: Dynamic parameter adjustment based on performance
+- **Latency-Aware Execution**: Realistic order routing with slippage simulation
+- **Comprehensive Risk Analytics**: Real-time VaR, Sharpe ratio, and drawdown monitoring
+
+Author: Spencer Purdy
+""")
+
+        with gr.Tab("Arbitrage Simulation"):
+            with gr.Row():
+                with gr.Column(scale=1):
+                    n_cycles = gr.Slider(
+                        minimum=10, maximum=100, value=50, step=10,
+                        label="Number of Trading Cycles"
+                    )
+                    initial_capital = gr.Number(
+                        value=100000, label="Initial Capital ($)", minimum=10000
+                    )
+                    min_profit = gr.Slider(
+                        minimum=0.1, maximum=1.0, value=0.2, step=0.1,
+                        label="Minimum Profit Threshold (%)"
+                    )
+
+                    run_btn = gr.Button("Run Simulation", variant="primary")
+
+            with gr.Row():
+                opportunity_network = gr.Plot(label="Arbitrage Opportunity Network")
+
+            with gr.Row():
+                performance_dashboard = gr.Plot(label="Performance Dashboard")
+
+            with gr.Row():
+                execution_analysis = gr.Plot(label="Execution Analysis")
+
+            with gr.Row():
+                with gr.Column(scale=1):
+                    summary_display = gr.Markdown(label="Summary Statistics")
+
+                with gr.Column(scale=1):
+                    opportunities_table = gr.DataFrame(
+                        label="Recent Arbitrage Opportunities"
+                    )
+
+        with gr.Tab("Strategy Analysis"):
+            with gr.Row():
+                analyze_btn = gr.Button("Analyze Strategy Parameters", variant="primary")
+
+            with gr.Row():
+                param_evolution = gr.Plot(label="Parameter Evolution")
+
+            with gr.Row():
+                param_reasoning = gr.Markdown(label="Optimization Reasoning")
+
+        # Event handlers
+        run_btn.click(
+            fn=run_arbitrage_simulation,
+            inputs=[n_cycles, initial_capital, min_profit],
+            outputs=[
+                opportunity_network, performance_dashboard,
+                execution_analysis, summary_display, opportunities_table
+            ]
+        )
+
+        analyze_btn.click(
+            fn=analyze_strategy_parameters,
+            inputs=[],
+            outputs=[param_evolution, param_reasoning]
+        )
+
+        # Add examples
+        gr.Examples(
+            examples=[
+                [50, 100000, 0.2],
+                [30, 50000, 0.3],
+                [100, 200000, 0.15]
+            ],
+            inputs=[n_cycles, initial_capital, min_profit]
+        )
+
+    return interface
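+
+# Usage note: on Hugging Face Spaces the launch() call below runs as-is; for a
+# local run, interface.launch(share=True) is a standard Gradio option that also
+# exposes a temporary public link.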
+
+# Launch application
+if __name__ == "__main__":
+    interface = create_gradio_interface()
+    interface.launch()