diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgcfk" "b/data_all_eng_slimpj/shuffled/split2/finalzzgcfk" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgcfk" @@ -0,0 +1,5 @@ +{"text":"\n\n\\section{Introduction}\n\\label{sec:introduction}\n\\IEEEPARstart{U}{nderstanding} the system model is essential for robotic applications, especially safety-critical autonomous systems. In particular, in high-speed driving, various dynamics elements such as the chassis, tires, or engine become crucial for implementing high-speed autonomy.\nModel-based optimal control\cite{lewis2012optimal,jung2021game} is well-suited for handling those factors and is widely used to design dynamic system controllers.\nBy leveraging physics-based parametric dynamic models, it optimizes driving behavior with respect to a designed objective function and enables safe and reliable control system design.\n\nDespite the success of the model-based approach in robotics, model-based algorithms face two fundamental challenges: model fidelity and tractability.\nThe performance of model-based approaches relies heavily on the accuracy of the model. However, identifying accurate models is often laborious or intractable because of their large search space and nonlinearity. Beyond model accuracy, models also need to be computationally feasible for real-time control applications. 
High-fidelity but highly complex models are often difficult to integrate into real-time safety-critical driving systems.\n\nTo tackle these challenges, conventional approaches, including the Prediction Error Method, are used to identify model parameters\\cite{tangirala2018principles}.\nHowever, those methods often require the model structure to be linear or in specific mathematical forms, which might not be feasible to describe the nonlinear high-speed autonomous driving system.\nOn the other hand, in several recent works, data-driven approaches using neural networks, Gaussian processes, or Bayesian methods have been actively employed for nonlinear system dynamics modeling and have shown promising results\\cite{brunton2022data}. In \\cite{spielberg2019neural}, they proposed a simple neural network to replace a single-track vehicle model and used it to generate feedforward control signals. Similarly, in \\cite{hermansdorfer2021end}, they designed Deep Neural Networks (DNN) as a model approximator to identify the vehicle dynamics model in an end-to-end learning fashion.\nHowever, while DNN is an efficient way to approximate nonlinear systems, it is difficult to integrate with non-learning model-based methods, which are reliable in real-world applications. Furthermore, it is challenging to ensure the validity of the DNN model in unseen driving scenarios without large-scale field tests.\n\nIn this letter, we propose a data-driven model identification method via hyperparameter optimization (MIHO) for high-speed autonomous racing systems. 
Our key idea is to leverage a parameter optimization approach from machine learning to identify physics-based parametric models in a data-driven manner without any limitation on the form of the model equation.\nTo this end, we adopt a novel hyperparameter optimization (HPO) method that has an efficient exploration and exploitation strategy.\nUsing the proposed method, we estimate the parameters of the integrable parametric dynamics models for a full-scale autonomous racecar platform, Dallara AV-21 (Fig. \ref{fig:method_overview}), at the Indy Autonomous Challenge (IAC) \cite{IAC}.\nWe validate our proposed approach by integrating the identified models into the high-speed autonomous system and conducting extensive field experiments, including over $200 km\/h$ autonomous driving and obstacle avoidance scenarios at the Indianapolis Motor Speedway (IMS) and the Las Vegas Motor Speedway (LVMS).\n\nIn summary, our technical contributions are as follows:\n\begin{itemize}\n \item We propose a data-driven model identification method via hyperparameter optimization.\n \item We design model-based planning and control systems incorporating the learned vehicle dynamics models.\n \item We integrate the systems with learned model parameters into the full-scale autonomous race vehicle and extensively validate them during the IAC.\n\end{itemize}\n\section{Model Identification via Hyperparameter Optimization}\n\label{sec:model_identification}\nVehicle dynamic models allow us to describe the race vehicle's motion accurately. However, dynamics models with high fidelity are often challenging to identify due to their high nonlinearity and a large number of model parameters. 
Therefore, an efficient parameter estimation approach is necessary to find the parameter configuration of such complex models.\nIn this letter, we propose a model identification method via hyperparameter optimization (MIHO) to learn the optimal model parameter configuration in a data-driven manner.\nHyperparameter optimization (HPO) is the problem of selecting an optimal hyperparameter configuration required for neural network training in the machine learning field \cite{feurer2019hyperparameter}.\nA hyperparameter is a parameter that controls the training process.\nHPO optimizes the hyperparameter configuration by evaluating the performance of the configuration during the model training process.\nSince a single course of neural network training requires substantial time, HPO focuses on a balanced exploration and exploitation strategy for efficient optimal hyperparameter selection \cite{feurer2019hyperparameter}.\nMotivated by this balanced strategy, we design MIHO by adapting HPO to the model identification problem. First, we regard a parameter configuration $p \in \mathbb{R}^{n_p}$ with $n_p$ model parameters as a set of hyperparameters of a nonlinear dynamics model $f$. Then, we identify the parameter configuration by evaluating the following objective function inspired by the standard supervised learning problem:\n\begin{equation}\n \label{eq:objective}\n \mathcal{L} = \frac{1}{\lvert D \rvert} \sum_{(x,y) \in D} \lVert y - {f}(x; p) \rVert^2,\n\end{equation}\nwhere $x, y$ denote the sampled input and output data of the model $f$ from a given dataset $D$. 
By minimizing this learning objective, we find an optimized model parameter configuration $p^*$ that has the minimum model error with the observed model output $y$.\n\\begin{algorithm}[b]\n\\caption{MIHO Algorithm based on Hyperband}\n\\textbf{Input: } $R, \\eta, D$\n\\begin{algorithmic}[1]\n\\State $ s_{max} \\leftarrow \\lfloor \\text{log}_{\\eta}(R) \\rfloor, B = (s_{max} + 1)R $\n \n\\For{$s \\in \\{s_{max},s_{max}-1, ..., 0 \\}$}\n \\State $n = \\lceil \\frac{B}{R} \\frac{\\eta^{s}}{(s+1)} \\rceil, r = R\\eta^{-s}$\n \\State $P = \\text{get\\_model\\_param\\_config}(n)$ \n \\For{$j \\in \\{ 0, ..., s \\}$}\n \\State $ n_j = \\lfloor n\\eta^{-j} \\rfloor, r_j = r\\eta^j $\n \\State $ L = \\{ \\text{eval\\_with\\_mutation}(p,r_j,D) : p \\in P \\} $\n \\State $ P = \\text{select\\_top\\_k\\_config}(P, L, \\lfloor n_j\/\\eta \\rfloor) $\n \\EndFor\n\\EndFor\n\\end{algorithmic}\n\\textbf{Output: } \\text{Optimized parameters $p^*$ with the smallest loss.}\n\\label{algo:MIHO}\n\\end{algorithm}\nThe model $f$ has no limitation on its form of the equation. 
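A minimal Python sketch of the bracket loop in the MIHO algorithm above may help make the resource schedule concrete. The toy linear model, the uniform sampling range, and the $\sigma/(1+i)$ annealing schedule are illustrative assumptions, not the settings used on the AV-21.

```python
import math
import random

def loss(p, data):
    # Mean squared model error (Eq. 1) for a toy linear model f(x; p) = p0 + p1*x.
    return sum((y - (p[0] + p[1] * x)) ** 2 for x, y in data) / len(data)

def eval_with_mutation(p, r, data, sigma=0.5):
    # Evaluate a configuration for r iterations, keeping Gaussian mutations
    # (cf. Eq. 2) that lower the loss; sigma is annealed over the iterations.
    best, best_loss = list(p), loss(p, data)
    for i in range(r):
        s = sigma / (1.0 + i)  # simple annealing schedule (our assumption)
        cand = [v + s * random.gauss(0.0, 1.0) for v in best]
        cand_loss = loss(cand, data)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss

def miho(R, eta, data, n_p=2):
    # Hyperband-style brackets: each bracket trades the number of sampled
    # configurations against the resource allocated per configuration.
    s_max = int(math.floor(math.log(R) / math.log(eta) + 1e-9))
    B = (s_max + 1) * R
    best, best_loss = None, float("inf")
    for s in range(s_max, -1, -1):
        n = math.ceil(B / R * eta ** s / (s + 1))
        r = max(1, R // eta ** s)
        P = [[random.uniform(-5.0, 5.0) for _ in range(n_p)] for _ in range(n)]
        for j in range(s + 1):
            n_j = n // eta ** j
            r_j = r * eta ** j
            results = sorted((eval_with_mutation(p, r_j, data) for p in P),
                             key=lambda t: t[1])
            if results and results[0][1] < best_loss:
                best, best_loss = results[0]
            # Keep only the top-performing configurations (successive halving).
            P = [p for p, _ in results[: max(1, n_j // eta)]]
    return best, best_loss
```

Since the loop only touches the model through the loss evaluation, any parametric form can be substituted for the toy model.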
Thus, our method can be used for arbitrary parametric models, such as a combination of polynomial or mathematical terms, as well as analytic physics-based models.\n\nWe implement MIHO incorporating a bandit-based HPO algorithm, Hyperband\\cite{li2017hyperband}, as summarized in Algorithm \\ref{algo:MIHO}.\nIt is a variation of a random search algorithm with explore-exploit theory to find the optimal hyperparameter configuration based on an evaluation loss.\nThe algorithm needs two arguments: $R$, the maximum amount of resource (e.g., the number of evaluation iterations) that can be allocated to a single configuration, and $\\eta$, a value that determines the proportion of the discarded configurations.\nThe two arguments derive $s_{max}+1$ combinations (called \"brackets\" in \\cite{li2017hyperband}) of the values $n$ and $r$, which enables various ratios of exploration and exploitation for finding the optimal parameter configuration.\nHyperband compares the evaluation loss of each sampled configuration and allocates more resources to the configurations with lower evaluation losses, excluding the configurations with higher losses. It repeats the sampling and exclusion processes until the last configuration remains to obtain the optimal set of hyperparameters.\nTo adjust the HPO algorithm to the model parameter optimization, we add the Gaussian mutation \\cite{mutation2019genetic} during the evaluation to explore the new neighbor parameters that might have less model loss. 
Unlike the original HPO, which only allocates more resources $r_i$, our approach, MIHO, adds a noise perturbation with a random variable $\epsilon \in \mathbb{R}^{n_p}$ to the selected configuration $p$ after the resource allocation:\n\begin{equation}\n \label{eq:gaussian_mutation}\np_{mut} = p + \sigma \odot \epsilon, \quad \epsilon \sim N(0, I),\n\end{equation}\nwhere $\odot$ is the element-wise product and $\sigma \in \mathbb{R}^{n_p}$ is the standard deviation of the exploration noise that is annealed over the course of the evaluation \cite{kirkpatrick1983optimization}.\nWe define the following three functions for the HPO process in MIHO:\n\begin{figure}[t]\n\centering\n\includegraphics[width=0.44\textwidth]{figures\/method_overview.png}\n\caption{\nOverview of our autonomous driving system in the AV-21. Our learned model parameters are embedded in the planning and control modules that are covered in this letter (highlighted in blue).\nSeveral input variables are omitted for clarity.\n}\n\label{fig:method_overview}\n\vspace{-1.5em}\n\end{figure}\n\begin{itemize}\n\item \textit{$\text{get\_model\_param\_config}(n)$}\text{:} a function that returns a set of $n$ random parameter configurations from the normal distribution pre-defined over the configuration space.\n\n\item \textit{$\text{eval\_with\_mutation}(p,r_j,D)$}\text{:} a function that receives a parameter configuration $p$, an allocated resource $r_j$, and a dataset $D$ as arguments. Using the dataset, this function evaluates an initial configuration and mutates it for the allocated $r_j$ iterations by Eq. \ref{eq:gaussian_mutation}.\nIf a mutated configuration $p_{mut} \in \mathbb{R}^{n_p}$ has a lower loss than the initial one, the function replaces $p$ with $p_{mut}$. 
It returns the final loss after spending the allocated resources.\n \n\item \textit{$\text{select\_top\_k\_config}(P, L, k)$}\text{:} a function that receives a set of hyperparameter configurations $P$ with their corresponding evaluation losses $L$ and returns the top $k$ high-performing configurations (here, $k = \lfloor n_j\/\eta \rfloor$).\n\end{itemize}\n\section{Vehicle Dynamics Model}\n\label{sec:vehicle_dynamics_model}\n\subsection{Tire Dynamics Model}\nTire dynamics is one of the factors that significantly affect the nonlinearity of driving dynamics. In particular, the lateral tire model is crucial for designing stable path-tracking control in high-speed driving.\nThe tire model\cite{bakker1987tyre} can be described as a function of the slip angle $\alpha_i$, slip ratio $\rho_{x,i}$, inclination angle $\theta_i$, tire load $F_{z,i}$, and current velocity $v_{x,i}$, which yields the lateral tire force $F^{*}_{y,i}$ of each tire ($i \in \{LF, LR, RF, RR\}$):\n\begin{equation}\n \label{eq:pacejka_tire_function}\n \begin{aligned}\n F^{*}_{y,i} = f_{tire}(\alpha_i, \rho_{x,i}, \theta_i, F_{z,i}, v_{x,i}).\n \end{aligned}\n\end{equation}\nAlthough the model captures various dynamics effects with high fidelity, it is poorly suited for designing high-speed driving controllers, which require real-time performance.\nTherefore, we first define a dimension-reduced tire model\nthat can be applied to model-based control design within an acceptable complexity. 
We then optimize the model's parameter configuration to represent the overall tire characteristic of a given dataset using our MIHO algorithm.\nWe follow the Pacejka tire model\cite{kabzan2020amz} to define the tire dynamics.\nWhile the prior work neglects the vertical and horizontal offsets of the model, we formulate a tire model $F_{y,i} = f_{t,i}(\alpha_i; {p}_{t,i})$ containing the offset parameters $S_{x,i}, S_{y,i}$ to describe the asymmetric tire characteristic, which is tuned to maximize cornering performance on an oval track:\n\begin{equation}\n \label{eq:tire_model}\n \begin{aligned}\n F_{y,i} &= D_i \sin(C_i \arctan(B_i (\alpha_i + S_{x,i}))) + S_{y,i},\n \end{aligned}\n\end{equation}\nwhere the tire model parameter configuration ${p}_{t,i} = \{B_i, C_i, D_i, S_{x,i}, S_{y,i}\}$ is identified by minimizing the following tire model objective with a given dataset $D_{t,i}$ as\n\begin{equation}\n \label{eq:objective_tire}\n \mathcal{L}_{t,i} = \frac{1}{\lvert D_{t,i} \rvert} \sum_{(\alpha_i, F^{*}_{y,i}) \in D_{t,i}} \lVert F^{*}_{y,i} - f_{t,i}(\alpha_i; {p}_{t,i}) \rVert^2.\n\end{equation}\n\n\subsection{Engine Torque Model}\nThe powertrain system of our racecar consists of an internal combustion engine, transmission, and wheels. The AV-21 is a rear-wheel-drive vehicle whose traction force $F_{x,r}$ is generated by engine-based driveline dynamics. 
We model the equation of the longitudinal dynamics\\cite{rajamani2011vehicle} as follows:\n\\begin{equation}\n\\label{eq:longi_model}\n\\begin{aligned}\n m a_{x} = F_{x,r} - C_d v^{2}_{x} - C_r,\n\\end{aligned}\n\\end{equation}\nwhere $m$ is the vehicle mass, $v_x$ is the longitudinal velocity, $C_d$ denotes the drag coefficient, and $C_r$ denotes the rolling resistance.\nFollowing a prior work \\cite{engineSAGE16}, the traction force can be expressed as:\n\\begin{equation}\n\\label{eq:traction_force}\n\\begin{aligned}\n F_{x,r} = m a_{x,r} = \\frac{T_{e} \\eta_{t} i_{g} i_{0}}{R_w},\n\\end{aligned}\n\\end{equation}\nwhere $a_{x,r}$ denotes the traction acceleration, $\\eta_{t}$ denotes the efficiency of the transmission, $i_{g}, i_{0}$ denote the transmission ratio of the current gear and final reducer, and $R_w$ denotes the wheel radius. $T_{e} = f_e (w_e, \\tau_t)$ is the engine torque map in terms of the engine speed $w_e$ and throttle command $\\tau_t$. Due to the high complexity of the engine characteristic, the torque map is expressed as an experimental lookup table based on engine torque curves of specific throttle opening commands.\nAlthough the engine torque model can be obtained by the engine dynamometer testing\\cite{killedar2012dynamometer}, it could suffer from modeling errors because the dynamometer testing is done in a static environment. Therefore, we build the engine torque map that integrates the dyno data with learned torque curves based on our data-driven model identification approach. 
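As a small sketch of this step, the torque target $T^{*}_{e}$ can be recovered from a measured traction acceleration by inverting Eq. \ref{eq:traction_force}, and a cubic torque curve of the kind fitted below can be evaluated directly. The numeric gear ratios and masses in the test are placeholders, not the AV-21 values.

```python
def engine_torque_target(m, a_xr, eta_t, i_g, i_0, R_w):
    # Invert Eq. (traction_force): T_e = m * a_{x,r} * R_w / (eta_t * i_g * i_0).
    # A measured traction acceleration a_{x,r} thus yields the torque label T*_e.
    return m * a_xr * R_w / (eta_t * i_g * i_0)

def torque_curve(w_e, p):
    # Third-order polynomial torque curve in the (normalized) engine speed w_e,
    # matching the cubic form used for the learned curves.
    return p[0] + p[1] * w_e + p[2] * w_e ** 2 + p[3] * w_e ** 3
```

Pairs $(w_e, T^{*}_{e})$ produced this way are exactly the training data the curve-fitting objective consumes.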
We express an engine torque curve $T_{e,\tau_{t}} = f_{\tau_{t}}(w_e; {p}_{\tau_{t}})$ of a throttle command $\tau_t$ as a third-order polynomial function of the engine speed $w_e$:\n\begin{equation}\n\label{eq:engine_map}\n\begin{aligned}\n T_{e,\tau_{t}} = p_{\tau_{t},0} + p_{\tau_{t},1} w_e + p_{\tau_{t},2} w^{2}_e + p_{\tau_{t},3} w^{3}_e,\n\end{aligned}\n\end{equation}\nwhere $p_{\tau_{t}} = \{p_{\tau_{t},0}, p_{\tau_{t},1}, p_{\tau_{t},2}, p_{\tau_{t},3}\}$ is the torque model parameter configuration. Using the resultant traction accelerations $a_{x,r}$ measured while driving, the engine torque output $T^{*}_{e,\tau_{t}}$ is obtained by Eq. \ref{eq:traction_force}.\nThen, $p_{\tau_{t}}$ is learned by minimizing the following engine model objective with a given dataset $D_{\tau_{t}}$:\n\begin{equation}\n \label{eq:objective_engine}\n \mathcal{L}_{\tau_{t}} = \frac{1}{\lvert D_{\tau_{t}} \rvert} \sum_{(w_{e}, T^{*}_{e,\tau_{t}}) \in D_{\tau_{t}}} \lVert T^{*}_{e,\tau_{t}} - f_{\tau_{t}}(w_e; {p}_{\tau_{t}}) \rVert^2.\n\end{equation}\nTo stabilize the learning process, we normalize the engine speed to the range $[0,1]$ by the maximum engine speed.\n\n\n\section{Model-based Planning and Control}\n\label{sec:model_based_vehicle_control}\nIn this section, we introduce a model-based planning and control algorithm that uses the learned model parameters. We exploit the learned tire parameters to design a dynamics-aware velocity planner and a model-based lateral controller (Fig. \ref{fig:method_overview}). 
We also integrate the engine dyno data with the learned engine torque model to construct an engine lookup table.\n\\subsection{Dynamics-aware Velocity Planning}\nHigh-speed cornering during racing causes significant tire load transfer on each wheel due to a lateral acceleration at the roll axis.\nSince the tire load governs the maximum performance of the tire, a model-based velocity strategy accounting for the real-time wheel load is necessary to maximize the tire performance without losing tire grip.\nWe introduce a dynamics-aware velocity planning algorithm that derives the velocity plans with maximum tire performance based on the learned tire dynamics. We first compute the real-time vertical tire load $F_{z,i}$ affected by the lateral load transfer $\\Delta W_f$\\cite{seward2017race}. The diagram of the load transfer at the roll axis is illustrated in the left of Fig. \\ref{fig:method_vehicle_dynamics_control}. The load transfer is computed by the roll couple $C_{roll} = m_s \\dot{v}_y h_a$, where $m_s$ is the sprung mass, $\\dot{v}_y$ is the lateral acceleration, and $h_a$ is the roll height. 
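The roll-couple computation just described can be written down directly. Dividing the couple by the track width to obtain the transferred load is our simplifying assumption (the letter cites the race-vehicle literature for the full derivation), and the numeric values in the test are arbitrary.

```python
def lateral_load_transfer(m_s, a_y, h_a, track_width):
    # Roll couple C_roll = m_s * a_y * h_a, as in the text. Converting the
    # couple into a per-axle transferred load by dividing by the track width
    # is a standard simplification and an assumption here, not the paper's
    # exact formula.
    c_roll = m_s * a_y * h_a
    return c_roll / track_width
```

The resulting transfer is added to (or subtracted from) the static wheel loads to obtain the real-time vertical tire loads $F_{z,i}$.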
As the learned tire model describes the characteristic for the nominal tire load $\bar{F}_{z,i}$, we compute the maximum lateral force of each tire $F^{max}_{y,i}$ in terms of the tire load ratio and the peak value of the tire model as:\n\begin{equation}\n\label{eq:max_lat_tire}\n\begin{aligned}\n F^{max}_{y,i} &= \mu \frac{F_{z,i}}{\bar{F}_{z,i}} F^{peak}_{y,i},\n\end{aligned}\n\end{equation}\nwhere $\mu$ is a tire performance factor to control the confidence and maximum performance of the tire model, and $\frac{{F}_{z,i}}{\bar{F}_{z,i}}$ is the tire load ratio.\nThe maximum lateral acceleration is determined by the following lateral motion dynamics \cite{kabzan2020amz}:\n\begin{equation}\n\label{eq:lat_accel_limit}\n\begin{aligned}\n a_{y,max} = \frac{1}{m} (F^{max}_{y,r} + F^{max}_{y,f} \cos(\delta) - m v_x \dot{\psi}),\n\end{aligned}\n\end{equation}\nwhere $\delta$ is the steering angle and $\dot{\psi}$ is the yaw rate.\nThen a desired maximum velocity $v_{x,des}$ is planned according to the curvature $\kappa$ of a reference path from a planning module \cite{lee2022resilient}:\n\begin{equation}\n\label{eq:velocity_limit}\n\begin{aligned}\n v_{x,des} = \sqrt{a_{y,max} \/ \kappa}.\n\end{aligned}\n\end{equation}\n\n\subsection{Throttle and Brake Control}\nThe planned desired velocity is fed to a feedback control module \cite{doyle2013feedback} to compute the traction force. However, as shown in Fig. \ref{fig:method_overview}, another low-level controller is required to transform the traction force into the throttle command for the racecar with its nonlinear driveline dynamics. We design the throttle and brake control system following \cite{hedrick1997brake}.\nFig. 
\\ref{fig:method_throttle_brake_control} shows the details of the low-level control system.\nWe exploit the integrated engine torque map to convert the desired engine torque $T_{e,des}$ to the desired throttle command $\\tau_{t,des}$.\nAs the torque map is built as a lookup table, we search the desired throttle with respect to a given engine speed and desired torque.\nThe inverse brake model is a module to convert the braking force to the brake pedal command, which is activated if $F_{x,r}$ is negative. The brake model is also attained by our proposed MIHO, but details are omitted to conserve space.\n\n\\subsection{Model-based Path Tracking Control}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.46\\textwidth]{figures\/method_vehicle_dynamics_for_control.png}\n\\caption{\n\\textbf{Left: } Lateral load transfer generation by the lateral acceleration at the roll axis. \n\\textbf{Right: } Overall diagram of the vehicle model.\n}\n\\label{fig:method_vehicle_dynamics_control}\n\\vspace{-1em}\n\\end{figure}\nWe follow the lateral vehicle dynamics of \\cite{rajamani2011vehicle} illustrated in the right of Fig \\ref{fig:method_vehicle_dynamics_control}. The lateral model is derived from the objective of tracking a reference trajectory. We implement path tracking control by stabilizing a velocity-dependent chassis model in terms of the error state variables $\\xi$ and control $u$.\n\\begin{equation}\n\\label{eq:state_control}\n\\begin{aligned}\n \\xi = [e_y, \\dot{e}_y, e_{\\psi}, \\dot{e}_{\\psi}]^T, \\quad u = \\delta,\n\\end{aligned}\n\\end{equation}\nwhere $e_y, e_{\\psi}$ denote the position and orientation error with respect to a given trajectory. The lateral model contains tire-related model parameters such as the cornering stiffnesses of the front and rear tires $C_{\\alpha,f}, C_{\\alpha,r}$. 
Since the lateral dynamics is obtained from the bicycle model, the front and rear tire models are optimized according to Eq \\ref{eq:tire_model}, but the sum of the left and right tire forces is used as the model output. The cornering stiffnesses then can be approximated as follows \\cite{rajamani2011vehicle}:\n\\begin{equation}\n\\label{eq:cornering_stiffness}\n\\begin{aligned}\n C_{\\alpha f} &\\approx B_{f} \\times C_{f} \\times D_{f}, \\quad C_{\\alpha r} \\approx B_{r} \\times C_{r} \\times D_{r}.\n\\end{aligned}\n\\end{equation}\n\nBased on the lateral vehicle model, we design the Linear Quadratic Regulator with the following optimization problem: \n\\begin{equation}\n\\label{eq:LQR}\n\\begin{aligned}\n \\underset{u}{\\text{min }} \\int_{0}^{\\infty} (\\xi^T Q \\xi + u^T R u ) dt,\n\\end{aligned}\n\\end{equation}\nwhere $Q, R$ denote gain matrices for LQR.\nFor the real-time control performance, we compute the state feedback optimal LQR gains over piecewise velocity intervals offline\\cite{spisak2022robust}.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{figures\/method_throttle_brake_control.png}\n\\caption{\nThrottle and brake control system. \n}\n\\label{fig:method_throttle_brake_control}\n\\vspace{-1em}\n\\end{figure}\n\\section{Evaluation}\n\\label{sec:evaluation}\n\\subsection{Analysis for Model Identification}\n\\subsubsection{Tire Dynamics Model}\nFig. \\ref{fig:result_tire_model} illustrates the learned tire models with datasets provided as the tire property files (*.tir).\nThe files contain tire force and moment characteristics with high fidelity\\cite{schmeitz2013mf}.\nFor model identification, we sampled 3000 data for each tire using the property files in various tire state conditions such as tire load, camber angle, slip angle, and slip ratio.\nAs illustrated in the left and middle of Fig. 
\ref{fig:result_tire_model}, the learned models fit the tire characteristic distribution well.\nWe further investigated the tire model with the test drive data collected during track racing.\nThe right of Fig. \ref{fig:result_tire_model} illustrates the front tire model of the single-track bicycle dynamics learned from the provided tire property data, compared with driving data that was not used for training.\nThe learned model generalizes well over the overall data distribution represented by blue dots.\nHowever, since we obtained the model by offline optimization and focused on the representativeness of the data, the model lacks accuracy in some edge cases near the peaks of the lateral force.\nTo handle these cases, online parameter optimization can be used by parallelizing the HPO process in MIHO, and we will implement it in future work.\n\n\subsubsection{Engine Torque Model}\nFig. \ref{fig:result_engine_brake} illustrates the learned engine torque curves and the integrated engine map. The data for the engine map was provided by engine dynamometer testing. \nFor higher reliability, we incorporated our data-driven engine torque models with the dyno data, especially for throttle pedal positions of 5, 15, and 20\%, where the dynamometer showed insufficient accuracy in torque measurements.\nThe result shows that the learned torque curves capture the change of the maximum torque with the throttle command. Moreover, the learned models also fit the torque curves that change nonlinearly as a function of engine speed.\nWe integrated the learned torque models with the provided dyno data and interpolated the torque data to construct an engine lookup table. The blue area on the right of Fig. \ref{fig:result_engine_brake} shows the region interpolated by the learned torque curves.\nOur vehicle utilized the learned region in racing scenarios such as pit-in\/out, obstacle avoidance, and driving within $100 km\/h$ (Fig. 
\\ref{fig:result_IMS_obstacle_only}).\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=0.86\\textwidth]{figures\/learned_tire_model.png}\n\\caption{\nLearned tire models with the provided tire data (\\textbf{Left:} left-front and right-front, \\textbf{Middle:} left-rear and right-rear). \\textbf{Right:} \nLearned front tire dynamics of the single-track bicycle model and the distribution of the collected data on the track.\n}\n\\label{fig:result_tire_model}\n\\vspace{-1em}\n\\end{figure*}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/learned_engine_only.png}\n\\caption{\nLearned engine torque curves and the integrated engine map.\n}\n\\label{fig:result_engine_brake}\n\\vspace{-1em}\n\\end{figure}\n\\subsection{Control Performance in Indy Autonomous Challenge}\nOur model-based planning and control algorithms were deployed in the full-scale racecar platform. Moreover, we extensively validated our learned model parameter-based algorithms in the real-world race tracks, IMS and LVMS.\nThe algorithms successfully performed various race scenarios, such as obstacle avoidance and high-speed autonomous driving over $200 km\/h$ on the race tracks (Fig. \\ref{fig:result_IMS_velocity_perform}).\n\n\\subsubsection{Obstacle Avoidance at the IMS}\nFig. \\ref{fig:result_IMS_obstacle_only} shows the quantitative results of the obstacle avoidance mission.\nIn this mission, obstacles are located before the first-corner section, where the velocity plan is critical for avoiding collision while keeping close to the racing line. 
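The velocity plan exercised in this mission follows Eqs. \ref{eq:max_lat_tire}--\ref{eq:velocity_limit} and can be sketched in a few lines. All numeric values in the test are illustrative, not the AV-21 parameters.

```python
import math

def max_lateral_force(mu, F_z, F_z_nom, F_y_peak):
    # Eq. (max_lat_tire): scale the learned peak lateral force by the tire
    # performance factor mu and the real-time tire load ratio F_z / F_z_nom.
    return mu * (F_z / F_z_nom) * F_y_peak

def desired_velocity(m, F_max_f, F_max_r, delta, v_x, yaw_rate, kappa):
    # Eq. (lat_accel_limit) followed by Eq. (velocity_limit); the clamp to
    # zero before the square root is a defensive assumption for the sketch.
    a_y_max = (F_max_r + F_max_f * math.cos(delta) - m * v_x * yaw_rate) / m
    return math.sqrt(max(a_y_max, 0.0) / kappa)
```

Lowering $\mu$ shrinks the admissible lateral acceleration and hence the planned velocity, which is how the safety margin is tuned.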
For the sake of safety, we set the tire performance factor $\mu$ as 0.7.\nOur dynamics-aware velocity planner allowed the racecar to maximize its velocity while regulating the lateral acceleration within the learned maximum tire performance during the rapid avoidance maneuvers.\nThe obstacle avoidance was initiated in high-speed driving at $100 km\/h$, and steering commands were computed up to -10 degrees to follow the generated collision-free reference trajectory.\nThe sharp steering command could cause significant lateral acceleration higher than $9.0 m\/s^2$, which might cause critical tire grip loss. \nHowever, our tire model-based velocity planner inferred the allowable maximum lateral acceleration based on the real-time tire load. As a result, a safe desired velocity could be planned within the lateral acceleration limit, preserving the tire grip performance.\n\subsubsection{High-Speed Autonomous Driving in LVMS}\nFurthermore, we extensively validated the control performance based on the optimized model parameters at LVMS.\nFig. \ref{fig:result_LVMS_tracking_perform_only} illustrates the quantitative results of the lateral and longitudinal control while our vehicle raced more than nine laps ($23 km$). Our path-tracking algorithm shows robust control performance leveraging the learned tire parameters. The largest position and orientation errors were $0.6 m$ and $-2.2$ degrees, respectively.\nIn addition, the AV-21 succeeded in high-speed autonomous driving above $144 km\/h$ (with a top speed of $217.4 km\/h$), a dynamic regime that had not been visited or tuned for before this track experiment.\nThese results demonstrate that MIHO can optimize and provide appropriate prior dynamics models offline for the design of model-based control before deployment.\nHowever, the vehicle dynamics characteristics changed during high-speed driving and affected the control performance. As shown in the bottom of Fig. 
\ref{fig:result_LVMS_tracking_perform_only}, the tire temperatures increased after reaching the unseen velocity range. In addition, after visiting the range of over $144 km\/h$, our low-level controller computed throttle commands of over 50\% in the engine operating range covered only by the provided dyno data. These factors might have affected the velocity error in the velocity range $[42, 52] m\/s$ of Fig. \ref{fig:result_LVMS_tracking_perform_only}.\nNevertheless, the control system can be improved with an extended data-driven engine map for throttle control at high speed.\nWe also point out that our method has the potential to be run online by parallelizing the HPO process\cite{li2017hyperband}, which enables the method to identify the model in real time during deployment. The online MIHO could be incorporated with the offline model optimization introduced in this work, and we leave it as important future work.\n\begin{figure*}[t]\n\centering\n\includegraphics[width=0.80\textwidth]{figures\/result_scenario.png}\n\caption{\n\textbf{Left: } Team KAIST's successful obstacle avoidance at IMS. 
The bottom illustrates the point cloud data and traveled trajectory during avoidance.\n\\textbf{Right: } Our AV-21 drove more than nine laps ($23 km$) at the Tri-Oval Superspeedway of LVMS.\n}\n\\label{fig:result_IMS_velocity_perform}\n\\vspace{-1.5em}\n\\end{figure*}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{figures\/result_IMS_obstacle_only.png}\n\\caption{\nResults of the velocity control, lateral accelerations, and steering angles during the obstacle avoidance mission at the IMS.\n}\n\\label{fig:result_IMS_obstacle_only}\n\\vspace{-1.5em}\n\\end{figure}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{figures\/result_LVMS_tracking_perform_only.png}\n\\caption{\nResults of the errors, velocity control, throttle\/brake controls, and temperature of the right-rear and right-front tires in LVMS.\n}\n\\label{fig:result_LVMS_tracking_perform_only}\n\\vspace{-1.5em}\n\\end{figure}\n\\section{Conclusion}\n\\label{sec:conclusion}\nWe present MIHO, a data-driven model identification method via hyperparameter optimization. Our approach showed the ability to optimize the parameters of the dynamics models, such as the tire models and engine torque curves. Furthermore, the model-based planning and control system with the learned model parameters demonstrated stable performance in the real-world track environments, IMS and LVMS. In future works, we will implement the online HPO method and integrate it with the offline method of this work to iteratively infer the changing parameters of the vehicle dynamics while on track.","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAbout 30-40\\% of the Galactic WR stars have a visible OB-type companion (Vanbeveren and Conti, 1980; van der Hucht, 2001; Crowther, 2007). 
Detailed massive single star and massive binary population synthesis\\footnote {including a model to account for mergers and a formalism to account for the effects on binary parameters of asymmetrical supernova explosions, a process responsible for separating binaries and thus for making single stars which were born as binary components} makes it then possible to answer the question: `What must be the initial O-type close binary frequency to explain the observed WR+OB binary frequency?'. An answer was presented by Vanbeveren et al. (1998a, b, c) who concluded that the initial massive close binary frequency must be at least 70\\%, at that time a result that was hard to accept by the massive star scientific community. Fortunately, this predicted high O-type close binary percentage was observationally confirmed about 1.5 decades later by Sana et al. (2012) and it is now generally accepted that most of the massive stars are born in close binaries.\n\n`Close' means that at some point during the binary evolution Roche lobe overflow (RLOF) will happen. RLOF may be accompanied by mass transfer, may result in the formation of a common envelope (note that during a common envelope phase mass transfer is not expected to happen) and\/or may result in the merger of both binary components. The interested reader may find useful information regarding these processes and the uncertainties in extended reviews and references therein (van den Heuvel, 1993; Vanbeveren et al., 1998b, c; Langer, 2012; De Marco \\& Izzard, 2017). 
Massive close binary evolution and the study of the binary processes listed above have been the subject of numerous papers since the sixties, but it is fair to state that the RLOF with all its facets is still not fully understood and it therefore requires further theoretical and observational research.\n\nExcept for about a quarter of Galactic massive WR stars, which are H-rich main-sequence stars of very high mass, most of the remaining Galactic WR-stars are hydrogen deficient core helium burning stars. Their progenitors have therefore experienced extensive mass loss. In the case of single stars this is possibly due to an LBV-type and\/or a red supergiant-type mass loss. When the WR-star is a binary component its progenitor may have lost its hydrogen rich layers by RLOF\/common envelope evolution. A comparison between the observations of WR+OB binaries and evolutionary predictions may therefore help to understand the RLOF\/common envelope evolution of their progenitors. Earlier attempts can be found in e.g., Vanbeveren et al., 1998 b, c; Petrovic et al., 2005a; Eldridge, 2009; Shenar et al., 2016.\n\nMass transfer is accompanied by angular momentum transfer and the mass gainer is expected to spin up. When mass transfer\/accretion goes via an accretion disk, Packet (1981) showed that this spin-up is very rapid and that soon after the onset of the RLOF the gainer will rotate at its critical break-up velocity. The possible effects of binary interaction on the rotation rates and the rotation velocity distribution of massive stars were investigated by De Mink et al. (2013). It was concluded that a significant percentage of all rapid rotators (maybe all of them) could have a binary origin. The authors note that a main uncertainty affecting this conclusion is the neglect of magnetic fields generated during mass accretion and stellar mergers (see also section 4). 
\n\nTo test these theoretical expectations, the observed spin-rates of the O components in WR+O binaries may be most illuminating. Shara et al. (2017) measured and analyzed the spin rates of the O stars in eight WR+O binaries using the Southern African Large Telescope (SALT), increasing the Galactic sample size from 3 to 11. The 5 known WR+O binaries of the SMC were investigated by Shenar et al. (2016). The available data are further discussed in section 2 of the present paper. In sections 3, 4 and 5, we will address the following two questions:\n\n\\begin{itemize}\n\\item \tWhen the RLOF is accompanied by mass transfer, do massive mass gainers always spin up until they reach their critical rotation velocity?\\\\ \n\\item In anticipation, we will show that the answer to the foregoing question is `no' and that tidal effects are probably not the (only) cause. The second question then is: Is there a physical spin-down process different from tidal effects which is of the same order as, but still less than, the spin-up process during RLOF?\n \n\\end{itemize}\n\n\\section{The WR+O binaries: the spin-rates of the O components}\n\nLet us start from Table 2 of Shara et al. (2017). The table lists the Galactic WR+O binaries (11 systems) for which the rotational velocity of the O-component has been determined\\footnote{Shara et al. (2017) determined and discussed the rotation speeds by analyzing two He-lines, He I 4922 and He II 4541; here we consider both.} or for which this rotational velocity was discussed in previous work (WR11 from Baade et al. 1990, WR127 from de la Chevrotiere et al. 2011, and WR139 from Marchenko et al. 1994). The table gives the observed mass functions and inclination angle-ranges from Lamontagne et al. (1996). For the WR binaries WR11 and WR139 the inclination angle is quite accurately known and therefore so are the orbital masses\\footnote{Note that North et al. 
(2007) reconsidered WR11 ($\\gamma$-Velorum) and proposed very similar mass estimates as those given in Table 2 of Shara et al., 2017}. For the other systems we have further restricted the inclination angle-ranges using the spectral type of the O-type components given by Crowther (2017) and applying the mass-spectral type\/luminosity class relation proposed by Vanbeveren et al. (1998c) for the O-type component \\footnote{A mass-spectral type-luminosity class relation relies on evolutionary tracks and therefore depends on parameters whose values are uncertain to some extent; in appendix A we discuss the relation.}. This yields the mass-range estimates of Table 1. Assuming alignment of orbital and spin vectors (not always obeyed: see e.g. Villar-Sbaffi et al. 2005 \\& 2006 for the WR+O binaries CQ Cep \\& CX Cep, resp.), this restriction then obviously also restricts the possible values of the equatorial velocity. To illustrate the procedure let us consider WR31. It is a WN4o+O8V binary with Msin$^3$i values 2.7 $M_{\\odot}$+ 6.3 $M_{\\odot}$. The inclination angle-range i = 40-62$^{\\circ}$. The mass-spectral type relation predicts a (generous) 24-34 $M_{\\odot}$ mass range for an O8V star; this means that an i-value close to 40$^{\\circ}$ is to be preferred and therefore an equatorial velocity close to the maximum value given in Shara et al. (2017) should be retained, i.e. 336\/274 km\/s. This exercise then yields a set of most probable WR+O star binary properties that are given in Table 2. \n\nAs was already concluded by Shara et al. (2017) the 11 WR+O binaries in our sample have O-type components that seem to be spun up, i.e. the 11 WR+O binaries all show the signature of mass transfer \\footnote{Although WR47 looks like an outlier our conclusions related to this binary also rely on the assumption that the WN6o component is hydrogen deficient, post-RLOF and core helium burning}. 
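The inclination restriction illustrated above for WR31 can be sketched in a few lines of Python. This is an illustrative check, not code from the paper; the function name is ours, and the input values are those quoted in the text:

```python
import math

def orbital_mass(m_sin3i, incl_deg):
    """Orbital mass (Msun) implied by the observed M sin^3 i at inclination i."""
    return m_sin3i / math.sin(math.radians(incl_deg)) ** 3

# WR31 (WN4o+O8V): M sin^3 i = 6.3 Msun for the O star, i-range 40-62 deg,
# and the mass-spectral type relation gives 24-34 Msun for an O8V star.
for i in (40, 62):
    print(f"i = {i} deg -> M_O = {orbital_mass(6.3, i):.1f} Msun")
# i = 40 deg gives ~23.7 Msun (compatible with 24-34 Msun);
# i = 62 deg gives ~9.2 Msun (far too small), so i near 40 deg is preferred.
```

Since only inclinations near the lower end of the range are compatible with the spectral-type mass, the equatorial velocity near the maximum tabulated value is retained, as the text argues.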
Since it is expected that a common envelope process is not accompanied by mass transfer, we are inclined to conclude that {\\it common envelope evolution has not played a significant role in the formation of our WR+O sample}. This also means that the WR stars in our sample of WR+O binaries are not formed by a stellar wind mass loss process alone, because stellar wind mass loss is not expected to be accompanied by significant mass accretion either. Note that if the 11 WR+O binaries in our sample are representative of the whole Galactic WR+O population, the foregoing conclusions may apply to this whole population.\nOne can think of two reasons why common envelope evolution may be rare among the most massive binaries: \n\na. as was shown by Sana et al. (2012) the period distribution of the most massive binaries may be skewed towards values smaller than the typical periods that are needed for a common envelope phase to happen (of the order of years) and \n\nb. once a massive star reaches the red supergiant (RSG) phase it is subject to very large stellar wind mass loss (Vanbeveren et al., 2007; Meynet et al., 2015); when a star like that is a component of a binary that would evolve through a common envelope phase if the effects of a RSG wind are ignored, the inclusion of the RSG-wind may significantly reduce the effects of the common envelope process; it may even completely suppress the common envelope phase. 
Note that most binary population studies (in particular those that tend to predict the merger rates of double BHs by means of isolated close binary evolution) do not account properly for the effect of RSG stellar winds and this makes the predictions very uncertain.\n\n\\section{WR+O binaries: the spin-rates of the RLOF progenitors of the O-components}\n\nSome of the WR+O binaries in our sample are very close and they may have been subject to tidal effects which would slow the rotation of the O-type star down; this means that the rotation of the O-stars may have been faster in the past. To estimate the rotation speed of the progenitors of the O-components we proceed as follows. It is generally known that the overall structure of a post-RLOF mass loser of a case B (AB) binary (given its mass) is largely independent of the details of the progenitor evolution in general, the details of the RLOF in particular. Using the Brussels binary evolutionary code we computed the helium burning evolution of a grid of post-RLOF mass losers\\footnote {For a description of our evolutionary code and the basic ingredients, see Vanbeveren et al., 1998b, c and Mennekens and Vanbeveren, 2014.}. These computations reveal that Galactic hydrogen deficient WN stars which are mostly early type WN stars (WNE stars using the nomenclature introduced by Vanbeveren and Conti, 1980) are on average 2.10$^{5}$ yrs past their progenitor RLOF; for the WC stars this is on average 3.10$^{5}$ yrs. These timescales hold for WR stars of the binaries listed in Table 2. 
Given a WR+O system with the parameters of Table 2 we interpolate in the grid in order to estimate how the binary looked immediately after RLOF, using the timescales given above and assuming that during these short WR-timescales the structure of the O-component does not change (a very reasonable assumption accounting for the fact that the O-component is a core hydrogen burning star evolving on a timescale that is typically 10-times longer than the WR-timescale). It is then straightforward, using a scenario that treats tidal effects for stars with a radiative envelope (we used the formalism of Zahn, 1977) to calculate backwards the rotation speed of the O star progenitor at the end of the RLOF. The results are shown in Table 2 as well (v$_{rot}$ of the O-type star at the beginning of the WR phase of the binary is of course v$_{rot}$ of the O-type component at the end of the RLOF). Note that (as is well known) tides are effective mainly in the shortest period binaries. Accounting for the fact that the critical rotation velocity of the O-type stars in our sample ranges between 600 km\/s and 800 km\/s the results lead us to conclude that\n\n\\begin{itemize}\n\\item most of the mass gainers in the progenitors of WR+O binaries are spun up by mass transfer but their rotation velocity at the end of the RLOF may be significantly smaller than the critical value. The latter is a fortiori true when the rotational velocities are considered resulting from the He II lines.\n\\end{itemize}\n\nThe foregoing conclusion also applies for the longest period WR+O binaries where the effects of tides (and their uncertainties) are small.\n\nAs a note added in proof, in a recent paper (Shenar et al., 2016) the authors have investigated the 5 known WR+O binaries in the SMC. The O-type stars seem to rotate super synchronously but also here the rotation speed may be far below critical. In particular, the binary AB8 consists of a WO4 component with an O4V companion. 
The O4V star has an apparent age of $\\sim$10$^{6}$ yrs (see also Appendix A) which is similar to the ages of O4V-III stars in the Magellanic Clouds determined by Massey et al. (2000). One concludes that the O4V component is rejuvenated with respect to the age of the WO4 star (3.10$^{6}$ yrs; see Shenar et al., 2016) meaning that mass transfer happened. The rotation speed of the O-type star lies in the range 130-230 km\/s. Since the binary period is 16.6 days, the effects of tidal interaction are small so that the present rotation speed of the O-type star almost equals the speed at the end of the RLOF of the progenitor. Compared to the critical rotation speed (which is of the order of 700 km\/s using the system parameters tabulated by Shenar et al., 2016) we conclude that here too it is very likely that the mass gainer has been spun up by mass transfer but at the end of RLOF its super-synchronous rotation velocity was considerably smaller than the critical speed. \n\n\\section{The WR+O binaries: why the progenitors of the O-components spin up during RLOF but may not reach the critical speed}\n\nThe conclusions of the previous section lead us to suspect that during the RLOF of the progenitors of the WR+O binaries there must be an efficient spin-down process, otherwise the O components would all be observed to be rotating at or near the critical speed. Since our WR+O sample reveals that all O-type stars rotate super-synchronously, the magnitude of the spin-down process should be of the same order as the spin-up but it should not completely suppress the spin-up. Note that this also applies for the longer period systems where the effects of tides (and their uncertainties) are expected to be small. Let us first remind the interested reader of a previous study (Dervisoglu, Tout and Ibanoglu, 2010) where the authors investigated the spins of the mass gainers of Algol binaries. 
In long-period Algols (P $>$ 5 days) mass transfer happens through a disc, implying a rapid spinning-up of the mass gainer. According to the analysis of Packet (1981) the rotation speed is expected to become critical soon after the onset of the mass accretion. However, the observations of longer period Algols reveal that the mass gainers have spins typically between 10 and 40\\% of the critical rate and therefore an efficient spin-down mechanism for the gainers is required. The authors argue that radiative tides are too weak to account for sufficient spin-down and they propose `{\\it the generation of magnetic fields in the radiative atmospheres of a differentially rotating star (the Spruit-Tayler type dynamo, Spruit, 2002 and Tayler, 1973) and the possibility of angular momentum loss driven by strong magnetic stellar winds (the model of Tout and Pringle, 1992)}'. Dervisoglu et al. (2010) presented a possible mathematical formalism for this process and concluded that the gainers in Algols with orbital periods longer than $\\sim$5 days rotate below breakup when a small amount of mass ($\\sim$10\\% of the transferred matter) is lost by the gainer, which has a rotationally induced magnetic field of the order of one or a few kG.\n\nSince the Algol situation shows some resemblance to the WR+O binary situation (mass transfer caused the spin-up of the mass gainer but the speed of the gainer at the end of RLOF is significantly below critical), we were tempted to test if the same process could work for the massive binaries as well. We follow the formalism described by Dervisoglu et al. and compare the following two processes.\n\nWhen during RLOF mass is transferred at a rate $\\dot{M}_{acc}$ from a disc to the star (with mass M and radius R) the rate of angular momentum transferred from the disc to the outer layers of the star is\n\n\\begin{equation}\n\\frac{dJ_{acc}}{dt} = \\dot{M}_{acc}\\sqrt{GMR}\n\\end{equation}\n\n\\noindent (G is the gravitational constant). 
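The competition between this accretion torque and the magnetic-wind torque of Dervisoglu et al. (2010), derived below, can be sketched numerically. The stellar mass, radius and spin used here are illustrative assumptions (the paper does not state the values behind its tables), and magnitudes are used throughout; the scaling of the ratio with $\dot{M}_{RLOF}$ and $B_s$ is, however, independent of those choices:

```python
import math

G = 6.674e-8                                  # cgs units throughout
MSUN, RSUN, YR = 1.989e33, 6.957e10, 3.156e7

def torque_ratio(beta, B_s, mdot_rlof, M=20 * MSUN, R=10 * RSUN, omega=1e-4):
    """|dJ_acc/dt| / |dJ_wind/dt| for disc accretion vs. a magnetic wind
    (formalism of Dervisoglu et al. 2010).  mdot_rlof is the magnitude of the
    RLOF mass loss rate in Msun/yr; M, R, omega are assumed, illustrative."""
    mdot = mdot_rlof * MSUN / YR              # g/s
    dj_acc = beta * mdot * math.sqrt(G * M * R)
    dj_wind = ((1 - beta) * mdot) ** (3 / 7) * B_s ** (8 / 7) \
              * (2 * G * M) ** (-2 / 7) * R ** (24 / 7) * omega
    return dj_acc / dj_wind

# The scaling dJ+/dJ- ~ mdot^(4/7) * B_s^(-8/7) holds for any M, R, omega:
r1 = torque_ratio(0.9, 500, 1e-3)
print(torque_ratio(0.9, 500, 5e-3) / r1)      # 5**(4/7) ~ 2.5
print(torque_ratio(0.9, 1000, 1e-3) / r1)     # 2**(-8/7) ~ 0.45
```

This factor of $\sim$2.5 between the two $\dot{M}_{RLOF}$ values is indeed what a column-by-column comparison of the paper's two dJ+/dJ- tables shows.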
\t \n\n\tThe angular momentum lost from a magnetic star due to a wind that is corotating up to the Alfv\u00e9n surface (with radius $R_{A}$) is\n\n\\begin{equation}\n\\frac{dJ_{w}}{dt} = \\dot{M}_{w}{R_{A}}^{2}\\Omega\n\\end{equation}\n\n\\noindent with $\\dot{M_{w}}$ the wind mass loss rate and $\\Omega$ the angular velocity of the outer layers of the star. If it is assumed that the generated magnetic field (with magnetic flux density at the stellar surface $B_s$ expressed in Gauss) is a dipole field, a straightforward calculation gives (using cgs units)\n\n\n\\begin{equation}\n\\frac{dJ_{w}}{dt} = -(-\\dot{M}_{w})^{3\/7}{B_{s}}^{8\/7}(2GM)^{-2\/7}R^{24\/7}\\Omega\n\\end{equation}\n\n\tDervisoglu et al. (2010) argued that a fully efficient Spruit-Tayler dynamo can generate a dipole field of the order of one or a few kG and it can support a wind with a rate up to 0.01 $M_{\\odot}$\/yr. This rate is comparable to the mass loss rate of the mass loser during the RLOF in intermediate-mass close binaries (forming the Algols) and in massive close binaries (forming the WR+O systems). If $\\dot{M}_{RLOF}$ is the mass loss rate of the mass loser during its RLOF it is useful to write $\\dot{M}_{acc}=-\\beta\\dot{M}_{RLOF}$ \nand \n$\\dot{M}_{w}=(1-\\beta)\\dot{M}_{RLOF}$\n\n\nNote that the Spruit-Tayler dynamo operates as long as the star is differentially rotating, i.e. as long as mass transfer happens. If mass transfer stops the star is expected to regain a state of uniform rotation and the magnetic field may vanish. Post-RLOF binaries or case A binaries \\footnote{Case A binaries are very short period binaries where RLOF starts while the mass loser is a core hydrogen burning star; case A RLOF starts with a rapid mass transfer phase on the thermal timescale of the loser, followed by a slow phase acting on the nuclear timescale of the loser, i.e. 
the slow phase lasts almost the entire remaining core hydrogen burning phase.} during the slow phase of RLOF may therefore not show the remains of such fields. We will come back to this later. \n\nTo illustrate whether or not the magnetically induced spin-down (formula 3) can compensate for the spin-up (formula 1), let us consider the evolution of a 30 $M_{\\odot}$ + 20 $M_{\\odot}$ zero age main sequence binary with a period assuring a case B type RLOF (we use the computations of de Loore and Vanbeveren, 1995). The computations reveal that during the rapid (resp. slow) phase of the RLOF the mass loss rate of the loser ($\\dot{M}_{RLOF}$) typically equals 5.10$^{-3}$ $M_{\\odot}$\/yr (resp. 10$^{-3}$ $M_{\\odot}$\/yr). Tables 3 and 4 show the ratio dJ+\/dJ- = $\\vert$expression (1)\/expression (3)$\\vert$ for the two $\\dot{M}_{RLOF}$ values, for different values of the magnetic field strength and different values of $\\beta$. Note that a ratio smaller than 1 corresponds to a situation where rapid spin-up is compensated. The results illustrate the following conclusion:\n\n\\begin{itemize}\n\\item\nWhen during the mass transfer phase of a massive binary the Spruit-Tayler dynamo can generate a magnetic field of the order of a few kG, then rapid spin-up of the mass gainer can be slowed down at the expense of a moderate mass loss from the binary.\n\\end{itemize}\n\nA more detailed calculation of this effect would require the implementation of rotation and magnetic fields in the stellar structure equations (as was done by Petrovic et al., 2005b, see also Potter et al., 2012) but this is beyond the scope of the present paper.\n\n\\section{Observational signature of magnetic fields in the O-type components of massive close binaries}\n\nPlaskett's star (HD 47129) is a massive binary (O8I+O7.5III) with a 14.4-day orbital period (Linder et al., 2008; Mahy et al., 2011). When accounting for the inclination angle range i = 69.3$^\\circ$-72.7$^\\circ$ proposed by Bagnuolo et al. 
(1992) both components have a mass between 50 $M_{\\odot}$ and 60 $M_{\\odot}$. The secondary is a fast rotator (v$_{rot}$sini $\\sim$300 km\/s) and it was argued by Linder et al. (2008) that the secondary may rather be an O7V star showing an apparent O7.5III spectrum because of its rapid rotation. Compared to the primary star the secondary is then rejuvenated indicating that mass transfer has happened and that the primary is on its way to become a WN star. It is tempting to conclude that the secondary was spun up by mass transfer but since the critical rotation speed of an O7V star is about 800 km\/s, the mass gainer did not reach this critical limit (note that tides in a binary with a 14.4-day period are not expected to be very effective so that the presently observed rotation speed almost equals the speed at the end of the previous RLOF), and all this is much like in many WR+O binaries in our sample. Most interestingly, a strong magnetic field was detected in the secondary (B $>$ 2.85 kG, Grunhut et al., 2013) and this makes Plaskett's star a strong candidate for the Spruit-Tayler dynamo scenario for the generation of kG-like magnetic fields in RLOF\/mass transferring binaries (see also Langer, 2014). The results of Tables 3 and 4 then illustrate that this magnetic field may be responsible for sufficient spin-down explaining why the mass gainer did not reach the critical speed.\n\nNaze et al. (2017) observed a sample of 15 interacting and post-interaction O-type binaries and, very interestingly, they found no indication of a magnetic field in any of the 15 stars and the authors conclude that binary interactions do not systematically trigger stable, strong magnetic fields in such systems. Most of the binaries in the sample of Naze et al. (2017) are very short period systems which makes them case A binary candidates during the slow phase of mass transfer. 
During the slow phase in a case A binary, rapid spin-up of the gainer is not expected to happen so that a Spruit-Tayler dynamo may not be operational. The results of Naze et al. support this view. Note that most of the binaries in the sample of Naze et al. (2017) are distinctly different from Plaskett's star which, according to its period, is most likely a case B binary with a mass transfer history that is quite different from that in case A binaries. \n\n\\section{Conclusions}\n\nThe rotational velocities of the O-type components in 8 WR+O binaries have been measured by Shara et al. (2017) using SALT; 3 measurements of the O-type components of Galactic WR-binaries and at least 1 in the SMC were available in the literature. All 12 rotate super-synchronously and we consider this strong evidence that they were spun up during the RLOF\/mass transfer of the progenitor binary. We conclude that common envelope evolution (meaning no mass transfer and thus no spin-up) has not played an important role in the formation of these 12 binaries, and generalizing, that common envelope evolution does not happen frequently in the massive binaries that produce WR+O systems. We implemented the formalism describing the effects of tides in stars with a radiative envelope in our binary evolutionary code and we computed the rotation speed of the O-type stars at the end of the RLOF of the progenitor binary. Many of them rotate significantly more slowly than critical. 
As an explanation we propose a model that has been worked out for long-period Algols (period $>$ 5 days) where the mass gainers do not rotate at the critical speed either:\n\n\\begin{itemize}\n\\item mass transfer causes the mass gainer to rotate differentially;\n\\item differential rotation makes the Spruit-Tayler dynamo operational and a magnetic field is generated;\n\\item when the magnetic field is of the order of a few kG, a magnetic wind sets in whose mass loss rate is of the order of the mass transfer rate of the mass loser;\n\\item the combination of wind and magnetic field spins the star down at a rate which is comparable to the mass accretion induced spin-up.\n\\end{itemize}\n\nIn Plaskett's star the mass gainer has been spun up but here too the critical speed was not reached. The star has an observed magnetic field B $>$ 2.85 kG and is therefore an observational test bed for the spin-up-spin-down scenario outlined above.\n\n\\begin{table*}\n\\caption{The mass-range of the O-type components in WR+O binaries resulting from the observed spectral type.}\n\\label{table1}\n\\centering\n\\begin{tabular}{c c c}\n\\hline\nSystem & Sp. Type & Mass estimate of O-star ($M_{\\odot}$) \\\\\n\\hline\nWR21 & WN5o+O4-5 & 37-60 \\\\\nWR30 & WC6+O6-8 & 24-50 \\\\\nWR31 & WN4o+O8V & 24-34 \\\\\nWR42 & WC7+O7V & 30-37 \\\\\nWR47 & WN6o+O5V & 37-70 \\\\\nWR79 & WC7+O5-8 & 24-60 \\\\\nWR97 & WN5b+O7 & $>$30 \\\\\nWR113 & WC8d+O8-9IV & 20-30 \\\\\nWR127 & WN5o+O8.5V & 17-24 \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\\begin{table*}\n\\caption{Most probable masses and rotation speed of the O-type component of the 11 WR+O binaries in our sample. Column 7 lists the rotation velocities at the end of the RLOF (= the beginning of the WR phase) of the progenitor binary calculated as explained in the text. Column 6 lists the v$_{rot}$ values resulting from the He I\/He II line measurements, as explained in Shara et al. (2017). 
A question mark means that this particular measurement is not available.}\n\\label{table2}\n\\centering\n\\begin{tabular}{c c c c c c c}\n\\hline\nSystem & Sp. Type & P (days) & WR mass ($M_{\\odot}$) & O-mass ($M_{\\odot}$) & v$_{rot}$ (km\/s) & v$_{rot}$ (km\/s) \\\\\n & & & & & & begin WR \\\\\n\\hline\nWR21 & WN5o+O4-5 & 8.3 & $>$19 & $>$37 & 331\/138 & 415\/173 \\\\\nWR30 & WC6+O6-8 & 18.8 & 16.4 & 34 & ?\/$>$178 & ?\/$>$205 \\\\\nWR31 & WN4o+O8V & 4.8 & $>$11 & $>$24 & 336\/274 & 493\/402 \\\\\nWR42 & WC7+O7V & 7.9 & 16 & 27 & 500-511\/170-177 & 620-630\/211-218 \\\\\nWR47 & WN6o+O5V & 6.2 & 52 & 60 & ?\/$>$88 & ?\/$>$190 \\\\\nWR79 & WC7+O5-8 & 8.9 & $>$9.5 & $>$24 & ?\/$>$174 & ?\/$>$250 \\\\\nWR97 & WN5b+O7 & 12.6 & 18 & 30 & 470\/279 & 502\/298 \\\\\nWR113 & WC8d+O8-9IV & 29.7 & $>$11 & $>$22 & 310-330\/170-181 & 320-340\/175-186 \\\\\nWR11 & WC8+O7.5III-V & 78.5 & 9.6 & 30 & 220\/? & 232\/? \\\\\nWR127 & WN5o+O8.5V & 9.5 & $>$9 & $>$20 & 300-366\/? & 350-416\/? \\\\\nWR139 & WN5o+O6III-V & 4.2 & 9.4 & 28 & 215\/? & 365\/? 
\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\\begin{table*}\n\\caption{dJ+\/dJ- as a function of $\\beta$ and $B_s$ for $\\dot{M}_{RLOF}$ = 10$^{-3}$ $M_{\\odot}$\/yr}\n\\label{table3}\n\\centering\n\\begin{tabular}{c c c c c c}\n\\hline\n$\\beta$ & $B_s$ = 500 G & $B_s$ = 1 kG & $B_s$ = 2 kG & $B_s$ = 3 kG & $B_s$ = 4 kG \\\\\n\\hline\n0.9 & 9 & 4.1 & 1.8 & 1.2 & 0.8\\\n0.8 & 6 & 2.7 & 1.2 & 0.8 & 0.6\\\n0.6 & 3 & 1.5 & 0.7 & 0.4 & 0.3\\\n0.4 & 1.9 & 0.8 & 0.4 & 0.2 & 0.2\\\n0.2 & 0.8 & 0.4 & 0.2 & 0.1 & 0.1\\\n0.1 & 0.4 & 0.2 & 0.1 & 0.05 & 0.04\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\begin{table*}\n\\caption{dJ+\/dJ- as a function of $\\beta$ and $B_s$ for $\\dot{M}_{RLOF}$ = 5.10$^{-3}$ $M_{\\odot}$\/yr}\n\\label{table4}\n\\centering\n\\begin{tabular}{c c c c c c}\n\\hline\n$\\beta$ & $B_s$ = 500 G & $B_s$ = 1 kG & $B_s$ = 2 kG & $B_s$ = 3 kG & $B_s$ = 4 kG \\\\\n\\hline\n0.9 & 22.5 & 10.2 & 4.6 & 2.9 & 2.1\\\n0.8 & 15 & 6.7 & 3 & 1.9 & 1.4\\\n0.6 & 8.3 & 3.8 & 1.7 & 1.1 & 0.8\\\n0.4 & 4.6 & 2.1 & 1 & 0.6 & 0.4\\\n0.2 & 2.1 & 0.9 & 0.4 & 0.3 & 0.2\\\n0.1 & 1 & 0.4 & 0.2 & 0.1 & 0.05\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Related Work}\n\n\tIn this section, we provide a brief overview of some related work in inertial navigation systems, pedestrian dead reckoning (PDR) and sequential deep learning. \n \n \\textbf{Strapdown Inertial Navigation System}: strapdown inertial navigation systems (SINS) have been studied for decades \\cite{Savage1998}. Previous inertial systems heavily relied on expensive, heavy, high-precision inertial measurement units, hence their main application had to be constrained to moving vehicles, such as automobiles, ships, aircraft, submarines and spacecraft. Recent advances in MEMS technology enable low-cost MEMS IMUs to be deployed on robots, UAVs \\cite{Bloesch2015}, and mobile devices \\cite{Lymberopoulos2015}. 
However, restricted by size and cost, a MEMS IMU has extremely limited accuracy and has to be integrated with other sensors, as in visual-inertial odometry \\cite{Leutenegger2015}. Another solution is to attach an IMU to the user's foot in order to take advantage of heel strikes for zero-velocity updates to compensate for system error drift \\cite{Skog2010}. These inconveniences prevent wide adoption of inertial solutions on consumer-grade devices \\cite{Harle2013}. \n \n \\textbf{Pedestrian Dead Reckoning}: Unlike SINS's open-loop integration of inertial sensors, PDR uses inertial measurements to detect steps and to estimate stride length and heading via an empirical formula \\cite{Shu2015a}. System errors still accumulate quickly because of incorrect step displacement segmentation and inaccurate stride estimation. In addition, a large number of parameters have to be carefully tuned according to a user's walking habits. Recent research has mainly focused on fusing PDR with external references, such as a floor plan \\cite{Xiao2014a}, WiFi fingerprinting \\cite{Hilsenbeck2014} or ambient magnetic fields \\cite{Wang2016a}, still leaving the fundamental problem of rapid drift unsolved. Compared with prior work, we abandon the step-based approach and present a new general framework for inertial odometry. This allows us to handle more general tracking problems, including trolley\/wheeled configurations, which step-based PDR cannot address.\n \n \\textbf{Sequential Deep Learning}: \n\tDeep learning approaches have recently shown excellent performance in handling sequential data, such as speech recognition \\cite{Graves2014}, machine translation \\cite{Dai2015}, visual tracking \\cite{Ondruska2016} and video description \\cite{Donahue2015}. To the best of our knowledge, our IONet is the first neural network framework to achieve inertial odometry using inertial data only. 
Previous learning-based work has tackled localization problems by employing visual odometry \\cite{Zhou2017,Wang2017,Clark2017} and visual-inertial odometry \\cite{Clark2017a}. Some other work has concentrated on learning intuitive physics \\cite{Hooman2017}, modeling state space models \\cite{Karl2016}, and supervising neural networks via physics knowledge \\cite{Stewart2017}. While most of these use visual observations, our work exploits real-world sensor measurements to learn high-level motion trajectories.\n\n\\section{The Curse of Inertial Tracking}\n\nThe principles of inertial navigation are based on Newtonian mechanics. They allow tracking the position and orientation of an object in a navigation frame given an initial pose and measurements from accelerometers and gyroscopes.\n\n\t\\begin{figure}\n \t\\centering\n \\includegraphics[width=0.48\\textwidth]{existing_methods.pdf}\n \\caption{\\label{fig:existing_methods} Architecture of existing methods: SINS and PDR}\n \\end{figure} \n\nFigure \\ref{fig:existing_methods} illustrates the basic mechanism of inertial navigation algorithms. The three-axis gyroscope measures angular velocities of the body frame with respect to the navigation frame, which are integrated into attitude in Equations (\\ref{eq:att_update1}-\\ref{eq:att_update3}). To represent the orientation, the direction cosine matrix $\\mathbf{C}_b^n$ describes the transformation from the body (b) frame to the navigation (n) frame, and is updated with a relative rotation matrix $\\mathbf{\\Omega}(t)$. The three-axis accelerometer measures proper acceleration vectors in the body frame, which are first transformed to the navigation frame and then integrated into velocity, discarding the contribution of gravity forces $\\mathbf{g}$ in Equation (\\ref{eq:vel_update}). The locations are updated by integrating velocity in Equation (\\ref{eq:loc_update}). 
Equations (\\ref{eq:att_update1}-\\ref{eq:loc_update}) describe the attitude, velocity and location update at any time stamp $t$. In our application scenarios, the effects of Earth's rotation and Coriolis accelerations are ignored. \n\n\tAttitude Update:\n\t\\begin{equation}\t\n \t\\label{eq:att_update1}\n \t\\mathbf{C}_b^n(t)=\\mathbf{C}_b^n(t-1)*\\mathbf{\\Omega}(t) \\\\ \n \\end{equation}\n \\begin{equation}\n \t\\label{eq:att_update2}\n \t\\mathbf{\\sigma}=\\mathbf{w}(t)dt\n \\end{equation}\n\t\\begin{equation}\n \t\\label{eq:att_update3}\n\t\t\\mathbf{\\Omega}(t)=\\mathbf{C}_{b_t}^{b_{t-1}}=I+\\frac{\\sin(\\sigma)}{\\sigma}[\\mathbf{\\sigma}\\times]+\\frac{1-\\cos(\\sigma)}{\\sigma^2}[\\mathbf{\\sigma}\\times]^2\n\t\\end{equation}\n \n\tVelocity Update:\n\t\\begin{equation}\n \t\\label{eq:vel_update}\n \t\\mathbf{v}(t)=\\mathbf{v}(t-1)+((\\mathbf{C}_b^n(t-1))*\\mathbf{a}(t)-\\mathbf{g}_n)dt \n\t\\end{equation}\n\n\tLocation Update:\n\t\\begin{equation}\n \t\\label{eq:loc_update}\n \t\\mathbf{L}(t)=\\mathbf{L}(t-1)+\\mathbf{v}(t-1)dt \n \\end{equation}\nwhere $\\mathbf{a}$ and $\\mathbf{w}$ are accelerations and angular velocities in the body frame measured by the IMU, $\\mathbf{v}$ and $\\mathbf{L}$ are velocities and locations in the navigation frame, and $\\mathbf{g}$ is gravity.\n \n \tUnder ideal conditions, SINS sensors and algorithms can estimate system states for all future times. High-precision INS in military applications (aviation and marine\/submarine) uses highly accurate and costly sensors to keep measurement errors very small. Meanwhile, such systems require a time-consuming initialization, including sensor calibration and orientation initialization. However, these requirements are inappropriate for pedestrian tracking. 
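The update equations (1)-(5) amount to a single dead-reckoning step. The following is a minimal NumPy sketch of that step (function names and the gravity value are our own assumptions, not from the paper):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v x] of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def sins_step(C, v, L, a_b, w_b, dt, g=np.array([0.0, 0.0, 9.81])):
    """One strapdown update, Eqs. (1)-(5).
    C: body-to-navigation rotation C_b^n(t-1); a_b, w_b: accelerometer and
    gyroscope readings in the body frame; returns the state at time t."""
    sigma = w_b * dt                           # Eq. (2)
    s = np.linalg.norm(sigma)
    if s > 1e-12:                              # Rodrigues formula, Eq. (3)
        S = skew(sigma)
        Omega = np.eye(3) + np.sin(s) / s * S + (1 - np.cos(s)) / s**2 * S @ S
    else:
        Omega = np.eye(3)
    v_new = v + (C @ a_b - g) * dt             # Eq. (4), uses C_b^n(t-1)
    L_new = L + v * dt                         # Eq. (5), uses v(t-1)
    return C @ Omega, v_new, L_new             # Eq. (1)
```

A stationary, level IMU reports the specific force `a_b = g` and zero angular rate, and the state stays fixed; any gyro or accelerometer noise fed into this loop is integrated (three times over, from rates to locations) and never forgotten, which is exactly the failure mode discussed next.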
Realizing a SINS mechanism on a low-cost MEMS IMU platform suffers from the following two problems: \n \n\\begin{itemize}\n\\item The measurements from IMUs embedded in consumer phones are corrupted by various error sources, such as scale factor, axis misalignment, thermo-mechanical white noise and random walk noise \\cite{NaserEl-SheimyHaiyingHou2008}. From the attitude update to the location update, the INS algorithm performs a triple integration from raw data to locations. Even tiny noise is greatly amplified through this open-loop integration, causing the whole system to collapse within seconds.\n\\item A time-consuming initialization process is not suitable for everyday usage, especially orientation initialization. Even small orientation errors lead to an incorrect projection of the gravity vector. For example, a 1-degree attitude error causes an additional 0.1712 $m\/s^2$ acceleration on the horizontal plane, leading to a 1.7 m\/s velocity error and an 8.56 m location error within 10 seconds.\n\\end{itemize}\n\nEven if we view this physical model as a state-space model and apply a deep neural network directly to model inertial motion, the intrinsic problems would still `curse' the entire system.\n\n\\section{Tracking Down A Cure} \n\tTo address the problem of error propagation, our novel insight is to break the cycle of continuous integration and segment the inertial data into independent windows. This is analogous to resetting an integrator to prevent windup in classical control theory \\cite{Hippe2006}.\n \n However, windowed inertial data is not independent, as Equations (\\ref{eq:att_update1}-\\ref{eq:loc_update}) clearly demonstrate. This is because key states (namely attitude, velocity and location) are \\textit{unobservable} - they have to be derived from previous system states and inertial measurements, and propagated across time. Unfortunately, errors are also propagated across time, cursing inertial odometry. 
It is clearly impossible for windows to be truly independent. However, we can aim for pseudo-independence, where we estimate the \\textit{change} in navigation state over each window.\nOur problem then becomes how to constrain or estimate these unobservable states over a window. Following this idea, we derive a novel sequence-based physical model from basic Newtonian Laws of Motion, and reformulate it into a learning model.\n \n The unobservable or latent system states of an inertial system consist of orientation $\\mathbf{C}_b^n$, velocity $\\mathbf{v}$ and position $\\mathbf{L}$. In a traditional model, the transformation of system states could be expressed as a transfer function\/state space model between two time frames in Equation (\\ref{eq:transfer_function}), and the system states are directly coupled with each other. \n \\begin{equation}\n \t\\label{eq:transfer_function}\n \t\\begin{matrix}\n \t[\\mathbf{C}_b^n & \\mathbf{v} & \\mathbf{L}]\n \\end{matrix}_t = f(\\begin{matrix} [\\mathbf{C}_b^n & \\mathbf{v} & \\mathbf{L}] \\end{matrix}_{t-1}, \\begin{matrix} [\\mathbf{a} & \\mathbf{w}] \\end{matrix}_t)\n \\end{equation}\n \nWe first consider displacement. 
To separate the displacement of a window from the prior window, we compute the change in displacement $\\Delta \\mathbf{L}$ over an independent window of $n$ time samples, which is simply:\n\t\\begin{equation}\n \t\\label{eq:displacement1}\n\t\t\\Delta \\mathbf{L} =\\int_{t=0}^{n-1} \\mathbf{v}(t) dt\n\t\\end{equation}\n\nWe can separate this out into a contribution from the initial velocity state, and the accelerations in the navigation frame:\n\t\\begin{equation}\n \t\\label{eq:displacement2}\n\t\t\\Delta \\mathbf{L} =n\\mathbf{v}(0)dt+[(n-1)\\mathbf{s}_1+(n-2)\\mathbf{s}_2+\\dotsi+\\mathbf{s}_{n-1}]dt^2\n\t\\end{equation}\nwhere \\begin{equation}\n \t\\mathbf{s}(t)=\\mathbf{C}_b^n(t-1)\\mathbf{a}(t)-\\mathbf{g}\n \\end{equation} is the acceleration in the navigation frame, comprising a dynamic part and a constant part due to gravity. \n\nThen, Equation (\\ref{eq:displacement2}) is further formulated as:\n\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\\Delta \\mathbf{L} &=n\\mathbf{v}(0)dt+[(n-1)\\mathbf{C}_b^n(0)*\\mathbf{a}_1+(n-2)\\mathbf{C}_b^n(0)\\mathbf{\\Omega}(1)\\\\\n &*\\mathbf{a}_2+\\dotsi +\\mathbf{C}_b^n(0)\\displaystyle\\prod_{i=1}^{n-2} \\mathbf{\\Omega}(i)*\\mathbf{a}_{n-1}]dt^2-\\frac{n(n-1)}{2}\\mathbf{g}dt^2\\\\\n\t\t\\end{split}\n\t\\end{equation}\nand simplified to become:\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\Delta \\mathbf{L} =n\\mathbf{v}(0)dt+ \\mathbf{C}_b^n(0) \\mathbf{T} dt^2-\\frac{n(n-1)}{2}\\mathbf{g}dt^2 \\\\\n\t\t\\end{split}\n\t\\end{equation}\nwhere \\begin{equation}\n \t\\mathbf{T}=(n-1)\\mathbf{a}_1+(n-2)\\mathbf{\\Omega}(1)\\mathbf{a}_2+\\dotsi+\\displaystyle\\prod_{i=1}^{n-2} \\mathbf{\\Omega}(i)\\mathbf{a}_{n-1}\n \\end{equation}\n\nIn our work, we consider the problem of indoor positioning i.e. tracking objects and people on a horizontal plane. 
This introduces a key observation: in the navigation frame, there is no long-term change in height\\footnote{This assumption can be relaxed through the use of additional sensor modalities, such as a barometer, to detect changes in floor level due to stairs or elevators.}.\nThe mean displacement in the z axis over a window is assumed to be zero and thus can be removed from the formulation. We can compute the absolute change in distance over a window as the L-2 norm i.e. $\\Delta l=\\|\\Delta \\mathbf{L} \\|_2$, effectively decoupling the distance traveled from the change in orientation (e.g. heading angle), leading to:\n\t\\begin{equation}\n \t\\begin{split}\n \t&\\Delta l=\\| n\\mathbf{v}(0)dt+ \\mathbf{C}_b^n(0) \\mathbf{T} dt^2-\\frac{n(n-1)}{2}\\mathbf{g}dt^2\\|_2 \\\\\n &=\\|\\mathbf{C}_b^n(0)(n\\mathbf{v}^b(0)dt+ \\mathbf{T} dt^2-\\frac{n(n-1)}{2}\\mathbf{g}_0^b dt^2)\\|_2\n \\end{split}\n \\end{equation}\n\nBecause the rotation matrix $\\mathbf{C}_b^n(0)$ is an orthogonal matrix, i.e. $\\mathbf{C}_b^n(0)^T\\mathbf{C}_b^n(0)=\\mathbf{I}$, it preserves the L-2 norm, and the unknown initial orientation is removed from the expression, giving us: \n \\begin{equation}\n \t\\begin{split}\n \t\t\\Delta l =\\|\\Delta \\mathbf{L} \\|_2&=\\| n\\mathbf{v}^b(0)dt+ \\mathbf{T} dt^2-\\frac{n(n-1)}{2}\\mathbf{g}_0^b dt^2\\|_2 \\\\\t\t\t\n \\end{split}\n \\end{equation} \nHence, over a window, the horizontal distance traveled can be expressed as a function of the initial velocity, the gravity, and the accelerations and angular rates, all in the body frame:\n\t\\begin{equation}\n\t\t\\Delta l=f(\\mathbf{v}^b(0), \\mathbf{g}_0^b, \\mathbf{a}_{1:n}, \\mathbf{w}_{1:n})\n\t\\end{equation}\n\nTo determine the change in the user's heading, we consider that a user's real accelerations and angular rates $(\\mathbf{a}_{1:n},\\mathbf{w}_{1:n})$ are also latent variables of the raw IMU measurements $(\\mathbf{\\hat{a}}_{1:n},\\mathbf{\\hat{w}}_{1:n})$, and on the horizontal plane, only the heading attitude is essential in 
our system. From Equations (\\ref{eq:att_update2}-\\ref{eq:att_update3}), the change in heading $\\Delta \\psi$ can be expressed as a function of the raw data sequence. Therefore, we succeed in reformulating the traditional model as a polar-vector $(\\Delta l, \\Delta \\psi)$ based model, which depends only on the inertial sensor data and the initial velocity and gravity in the body frame: \n \\begin{equation}\n \t\\label{eq:polar_vector}\n \t\t(\\Delta l, \\Delta \\psi)=f_{\\theta}(\\mathbf{v}^b(0), \\mathbf{g}_0^b, \\mathbf{\\hat{a}}_{1:n}, \\mathbf{\\hat{w}}_{1:n})\n \\end{equation}\n \nTo derive a global location, given the starting location $(x_0, y_0)$ and heading $\\psi_0$, the Cartesian projection over a window can be written as \n \\begin{equation}\n \t\\left\\{\n \t\\begin{aligned}\n \t\tx_n=x_0+\\Delta l \\cos(\\psi_0+\\Delta \\psi) \\\\\n \ty_n=y_0+\\Delta l \\sin(\\psi_0+\\Delta \\psi)\n \\end{aligned}\n \\right.\n \\end{equation}\n \n\nOur task now becomes how to implicitly estimate this initial velocity and the gravity in the body frame, by casting each window as a sequence learning problem.\n\nThis section serves as an overview showing the transition from the traditional model-based method to the proposed neural-network-based method. It takes the traditional state-space model described in Equations (\\ref{eq:att_update1}-\\ref{eq:loc_update}), which converts raw data to poses in a step-by-step manner, to a formulation where a window of raw inertial data is processed in a batch to estimate a displacement and an angle change. Note that in both formulations, the final output depends on the initial attitude and velocity. As a result, in both cases, the curse of error accumulation cannot be avoided when using the model-based integration approach. However, our sequence-based formulation paves the way for our proposed neural network approach. \n\n\n\\section{Deep Neural Network Framework}\nEstimating the initial velocity and the gravity in the body frame explicitly using traditional techniques is an extremely challenging problem. 
Rather than trying to determine the two terms, we instead treat Equation (\\ref{eq:polar_vector}) as a sequence, where the inputs are the observed sensor data and the output is the polar vector. The unobservable terms simply become latent states of the estimation. Intuitively, the motivation for this relies on the regular and constrained nature of pedestrian motion. Over a window, which could be a few seconds long, a person walking at a certain rate induces a roughly sinusoidal acceleration pattern. The frequency of this sinusoid relates to the walking speed. In addition, biomechanical measurements of human motion show that as people walk faster, their strides lengthen \\cite{Hausdorff2007}. Moreover, the gravity in the body frame is related to the initial pitch and roll angles, determined by the attachment\/placement of the device, which can be estimated from the raw data \\cite{Xiao2015a}. In this paper, we propose the use of deep neural networks to learn the relationship between raw acceleration data and the polar delta vector, as illustrated in Figure \\ref{fig:overview}.\n\nThe input data are independent windows of consecutive IMU measurements, which are strongly temporally dependent and represent body motion. To recover the latent connections between motion characteristics and data features, we use a deep recurrent neural network (RNN), which can exploit these temporal dependencies by maintaining hidden states over the duration of a window. Note however that latent states are not propagated between windows. 
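To make the window segmentation concrete, here is a minimal sketch (our own illustration; the function name and its defaults are hypothetical, not the paper's code):

```python
import numpy as np

def make_windows(imu, window=200, stride=200):
    """Cut an (N, 6) IMU stream [acc_xyz, gyro_xyz] into windows.

    Each window is handed to the network as an independent sequence:
    hidden states (and hence integration errors) are never carried
    from one window to the next.
    """
    starts = range(0, imu.shape[0] - window + 1, stride)
    return np.stack([imu[s:s + window] for s in starts])
```

With `stride` smaller than `window` the windows overlap, which raises the output rate of polar vectors at no cost to their independence at inference time.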
Effectively, the neural network acts as a function $f_{\\theta}$ that maps sensor measurements to polar displacement over a window\n\t\\begin{equation}\n\t\t(\\mathbf{a}, \\mathbf{w})_{200*6} \\xrightarrow{f_{\\theta}} (\\Delta l, \\Delta \\psi)_{1*2},\n\t\\end{equation}\nwhere a window length of 200 frames (2~s) is used here\\footnote{We experimented with a window size of 50, 100, 200 and 400 frames, and selected 200 as an optimal parameter regarding the trade-off between accumulative location error and predicted loss.}.\n \\begin{figure}\n \t\\centering\n \\includegraphics[width=0.5\\textwidth]{overview_new.pdf}\n \\caption{\\label{fig:overview}Overview of IONet framework}\n \\end{figure}\n\n\t\\begin{figure*}\n \t\\centering\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{handheld_users.pdf}\n \t\\caption{\\label{fig:hand_users} Handheld}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{pocket_users.pdf}\n \t\\caption{\\label{fig:pocket_users} In Pocket}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{handbag_users.pdf}\n \t\\caption{\\label{fig:handbag_users} In Handbag}\n \\end{subfigure}\n \\caption{\\label{fig:multi_users} Performance in experiments involving different users.}\n \\end{figure*}\n\n\t\\begin{figure*}\n \t\\centering\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{handheld_phones.pdf}\n \t\\caption{\\label{fig:hand_phones} Handheld}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{pocket_phones.pdf}\n \t\\caption{\\label{fig:pocket_phones} In Pocket}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{handbag_phones.pdf}\n \t\\caption{\\label{fig:handbag_phones} Handbag}\n \\end{subfigure}\n \\caption{\\label{fig:multi_phones} Performance in experiments involving different 
devices.}\n \\end{figure*}\n\n\t\\begin{figure*}\n \t\\centering\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{floor1_handheld.pdf}\n \t\\caption{\\label{fig:floor1_hand_phones} Handheld}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{floor1_pocket.pdf}\n \t\\caption{\\label{fig:floor1_pocket_phones} In Pocket}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{floor1_handbag.pdf}\n \t\\caption{\\label{fig:floor1_handbag_phones} In Handbag}\n \\end{subfigure}\n \\caption{\\label{fig:floor1} Trajectories on Floor A}\n \\end{figure*}\n \n \\begin{figure*}\n \t\\centering\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{floor4_handheld.pdf}\n \t\\caption{\\label{fig:floor4_hand_phones} Handheld}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{floor4_pocket.pdf}\n \t\\caption{\\label{fig:floor4_pocket_phones} In Pocket}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.3\\textwidth}\n \t\\includegraphics[width=\\textwidth]{floor4_handbag.pdf}\n \t\\caption{\\label{fig:floor4_handbag_phones} In Handbag}\n \\end{subfigure}\n \\caption{\\label{fig:floor4} Trajectories on Floor B}\n \\end{figure*}\n\n\tIn the physical model, orientation transformations impact all subsequent outputs. We adopt Long Short-Term Memory (LSTM) to handle the exploding and vanishing gradient problems of the vanilla RNN, as it has a much better ability to exploit long-term dependencies \\cite{Greff2016}. In addition, as both previous and future frames are crucial in updating the current frame, a bidirectional architecture is adopted to exploit the dynamic context.\n \n\tEquation (\\ref{eq:polar_vector}) shows that modeling the final polar vector requires modeling some intermediate latent variables, e.g. the initial velocity and gravity. 
Therefore, to build up a higher-level representation of the IMU data, we stack two LSTM layers, with the output sequence of the first layer supplying the input sequence of the second. The second LSTM outputs a single polar vector representing the transformation over the processed sequence. Each layer has 96 hidden nodes. To increase the output data rate of polar vectors and locations, the IMU measurements are divided into windows with a stride of 10 frames (0.1 s).\n \n\tThe optimal parameter $\\mathbf{\\theta}^*$ inside the proposed deep RNN architecture can be recovered by minimizing a loss function on the training dataset $\\mathbf{D}=(\\mathbf{X},\\mathbf{Y})$. \n \\begin{equation}\n \t\\theta^*= \\operatorname*{arg\\, min}_{\\theta} \\ell(f_{\\theta}(\\mathbf{X}),\\mathbf{Y})\n \\end{equation}\n\n\tThe loss function is defined as the sum of squared Euclidean distances between the ground truth $(\\Delta \\tilde{l}, \\Delta \\tilde{\\psi})$ and the estimated value $(\\Delta l, \\Delta \\psi)$. 
\n\t\\begin{equation}\n \t\\ell=\\sum \\|\\Delta \\tilde{l} - \\Delta l \\|_2^2+\\kappa \\|\\Delta \\tilde{\\psi} - \\Delta \\psi \\|_2^2\n \\end{equation}\n\twhere $\\kappa$ is a factor regulating the relative weights of the $\\Delta l$ and $\\Delta \\psi$ terms.\n\n\t\\begin{figure}\n \t\\centering\n \\includegraphics[width=0.5\\textwidth]{loss.pdf}\n \\caption{\\label{fig:loss}Loss of adopting various frameworks}\n \\end{figure}\n\n\t \\begin{figure*}\n \t\\centering\n \\begin{subfigure}[t]{0.32\\textwidth}\n \t\\includegraphics[width=\\textwidth]{traj_vicon.pdf}\n \t\\caption{\\label{fig:traj_vicon} Ground Truth}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \t\\includegraphics[width=\\textwidth]{traj_ionet.pdf}\n \t\\caption{\\label{fig:traj_ionet} IONet}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \t\\includegraphics[width=\\textwidth]{traj_tango.pdf}\n \t\\caption{\\label{fig:traj_tango} Tango}\n \\end{subfigure}\n \\caption{\\label{fig:traj_trolley} Trolley tracking trajectories of (a) Ground Truth (b) IONet (c) Tango}\n \\end{figure*} \n \n\\section{Experiments}\n\n\\subsection{Training Details}\n\n\t\\textbf{Dataset}: There are no public datasets for indoor localization using phone-based IMUs. We collected data ourselves with a pedestrian walking inside a room equipped with an optical motion capture system (Vicon) \\cite{Vicon2017}, which provides a highly precise full-pose reference (0.01 m in location, 0.1 degree in orientation) for our experimental device. The training dataset consists of IMU data from the phone in different attachments, e.g. handheld, in pocket, in handbag and on trolley, each for 2 hours, collected with a common consumer phone, an iPhone 7 Plus. Note that all of our training data was collected by User 1 carrying this phone. To test our model's ability to generalize, we invited three new participants and evaluated on another two phones, i.e. an iPhone 6 and an iPhone 5. 
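The $\kappa$-weighted loss defined above can be sketched in a few lines (a simplified numpy stand-in for the actual training code; the batched array layout is our assumption):

```python
import numpy as np

def polar_loss(pred, target, kappa=1.0):
    """Sum over a batch of squared errors in (delta_l, delta_psi).

    pred, target: arrays of shape (batch, 2) holding the polar vector
    (displacement, heading change) per window; kappa weights the
    heading term relative to the displacement term.
    """
    dl_err = (target[:, 0] - pred[:, 0]) ** 2
    dpsi_err = (target[:, 1] - pred[:, 1]) ** 2
    return float(np.sum(dl_err + kappa * dpsi_err))
```

Because displacements are in meters and heading changes in radians, $\kappa$ compensates for the difference in scale between the two terms.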
\n \n \\textbf{Training}: We implemented our model on the publicly available TensorFlow framework, and ran the training process on an NVIDIA TITAN X GPU. During training, we used Adam, a first-order gradient-based optimizer \\cite{Kingma2014}, with a learning rate of 0.0015. The training typically converges after 100 iterations. To prevent our neural networks from overfitting, we gathered data with abundant motion characteristics, and adopted Dropout \\cite{Srivastava2014} in each LSTM layer, randomly dropping 25\\% of the units during training. This method significantly reduces overfitting, and proves to perform well on new users, devices and environments. \n\n\t\\textbf{Comparison with Other DNN Frameworks}: To evaluate our choice of a 2-layer bidirectional LSTM for polar vector regression, we compare its validation results with various other DNN frameworks, including a vanilla RNN, a vanilla Convolutional Neural Network (CNN), a 1-layer LSTM and a 2-layer LSTM without bidirection. The training data are from all attachments. Figure \\ref{fig:loss} shows their validation loss curves. Our proposed framework with the 2-layer Bi-LSTM descends more steeply, and stays lower and smoother during training than all the other neural networks, supporting our choice, while the vanilla RNN suffers from vanishing gradients, and the CNN does not seem to capture temporal dependencies well. \n \n \\textbf{Testing:} We also found that training separately on each attachment gives better prediction performance than training jointly, hence we use the 2-layer Bi-LSTM prediction models trained on separate attachments in the following experiments. 
In a practical deployment, existing techniques can be adopted to recognize different attachments from pure IMU measurements \\cite{Brajdic2013}, providing the ability to dynamically switch between trained models.\n\n\t\\textbf{Baselines}: Two traditional methods are selected as baselines to compare with our prediction results: pedestrian dead reckoning (PDR) and the strapdown inertial navigation system (SINS) mechanism \\cite{Savage1998}. PDR algorithms are seldom open-sourced, especially robust PDR usable across different attachments, so we implemented the code ourselves, following \\cite{Brajdic2013} for step detection and \\cite{Xiao2014} for heading and step length estimation.\n\n\\subsection{Tests Involving Multiple Users and Devices}\n\t\n A series of experiments were conducted inside a large room with new users and phones to show our neural network's ability to generalize. The Vicon system provides a highly accurate reference for measuring location errors.\n\n The first group of tests includes four participants, walking randomly for two minutes with the phone in different attachments, i.e. in hand, in pocket and in handbag, covering everyday behaviors. Our training dataset doesn't contain data from three of these participants. The performance of our model is measured as an error cumulative distribution function (CDF) against the Vicon ground truth and compared with conventional PDR and SINS. Figure \\ref{fig:multi_users} illustrates that our proposed approach outperforms the competing methods in every attachment. If raw data is directly triply integrated by SINS, its error propagates exponentially. The maximum error of our IONet stayed around 2 meters for 90\\% of the testing time, a 30\\%-40\\% improvement compared with traditional PDR in Figure \\ref{fig:error_users}.\n\n\tAnother group of experiments tests the performance across different devices, shown in Figure \\ref{fig:multi_phones}. 
We choose another two very common consumer phones, the iPhone 6 and iPhone 5, whose IMU sensors, the InvenSense MP67B and ST L3G4200DH, are quite distinct from that of our training device, the iPhone 7 (IMU: InvenSense ICM-20600). Although the intrinsic properties of an IMU influence the quality of its inertial measurements, our neural network shows good robustness.\n\n\\subsection{Large-scale Indoor Localization}\n\n\tHere, we apply our model to a more challenging indoor localization experiment to demonstrate its performance in a new environment. Our model, \\textit{without any training outside} the Vicon room, is directly applied to six large-scale experiments conducted on two floors of an office building. The new scenarios contained long straight lines and slopes, which were not present in the training dataset. Lacking the highly precise reference from Vicon, we take the Google Tango tablet \\cite{Tango}, a well-known visual-inertial device, as pseudo ground truth.\n \n \\begin{figure}\n \t\\centering\n \\begin{subfigure}[t]{0.23\\textwidth}\n \t\\includegraphics[width=\\textwidth]{error_users.png}\n \t\\caption{\\label{fig:error_users} Multiple Users}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.23\\textwidth}\n \t\\includegraphics[width=\\textwidth]{error_phones.png}\n \t\\caption{\\label{fig:error_phones} Multiple Phones}\n \\end{subfigure}\n \\caption{\\label{fig:small_error}Maximum position error within 90\\% test time}\n \\end{figure}\n\n \t\\begin{figure}\n \t\\centering\n \\begin{subfigure}[t]{0.23\\textwidth}\n \t\\includegraphics[width=\\textwidth]{floor1_errnew.pdf}\n \t\\caption{\\label{fig:floor1_err} Floor A}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.23\\textwidth}\n \t\\includegraphics[width=\\textwidth]{floor4_errnew.pdf}\n \t\\caption{\\label{fig:floor4_err} Floor B}\n \\end{subfigure}\n \\caption{\\label{fig:large_error} Position error in large-scale indoor localization}\n \\end{figure}\n\n The floor maps are illustrated in Figure \\ref{fig:floor1} (about 1650 $m^2$) and Figure 
\\ref{fig:floor4} (about 2475 $m^2$). Participants walked normally along corridors with the phone in each of the three attachments. The trajectories predicted by our approach are closer to the Tango trajectories than those of the two other approaches in Figures \\ref{fig:floor1} and \\ref{fig:floor4}. The continuously propagating error of the SINS mechanism caused trajectory drifts that grow exponentially with time. PDR accuracy is limited by incorrect step detection and inaccurate step stride and heading estimation. We calculate the absolute position error against the pseudo ground truth from Tango at distances of 50 m and 100 m and at the end point in Figure \\ref{fig:large_error}. Our IONet shows competitive performance over traditional PDR and has the advantage of generating a continuous trajectory at 10 Hz, though its heading attitude occasionally deviates from the true values.\n\n\\subsection{Trolley Tracking}\n\t\n We consider a more general problem without periodic motion, which is hard for traditional step-based PDR or for SINS on limited-quality IMUs. Tracking wheel-based motion, such as a shopping trolley\/cart, robot or baby stroller, is highly challenging and hence under-explored. Current approaches to track wheeled objects are mainly based on visual odometry or visual-inertial odometry (VIO) \\cite{Li2013b,Bloesch2015}. They fail when the device is occluded or operating in low-light environments, for example when placed in a bag. Moreover, their high energy and computation consumption also constrains further application. Here, we apply our model to a trolley tracking problem \\textit{using only inertial sensors}. 
Due to a lack of comparable techniques, our proposal is compared with the state-of-the-art visual-inertial odometry of Tango.\n \n \\begin{figure}\n \t\\centering\n \\includegraphics[width=0.45\\textwidth]{trolley_cdf.pdf}\n \\caption{\\label{fig:trolley_cdf}CDF of Trolley Tracking}\n \\end{figure} \n \n Our experimental devices, namely an iPhone 7 and the Google Tango, were attached to a trolley pushed by a participant. The detailed experimental setup and results can be found in the supplementary video \\footnote{Video is available at: https:\/\/youtu.be\/pr5tR6Wz-zs}. A high-precision motion reference was provided by Vicon. Comparing the trajectories from Vicon, our IONet and Tango in Figure \\ref{fig:traj_trolley}, our proposed approach shows almost the same accuracy as Tango, and even better robustness, because our purely inertial approach suffers less from environmental factors. With the help of visual features, VIO (Tango) can constrain error drift by fusing visual transformations with the inertial system, but it collapses when it captures wrong features or no features, especially in open spaces. This happened in our experiment, as shown in Figure \\ref{fig:trolley_cdf}. Although VIO can recover from the collapse, a large distance error remains.\n\n\\section{Conclusion and Future Work}\n\nWe presented a new neural network framework to learn inertial odometry directly from raw IMU data. Our model is derived from Newtonian mechanics and formulated as a sequential learning problem using deep recurrent neural networks to recover motion characteristics. The performance of IONet is evaluated through extensive experiments including tests involving multiple users\/devices, large-scale indoor localization and trolley tracking. It outperforms both traditional step-based PDR and SINS mechanisms. We believe our work lays the foundation for a more accurate and reliable indoor navigation system fusing multiple sources. 
\n\nA number of research challenges lie ahead: 1) The performance of our proposed IONet degrades when the input IMU measurements are corrupted by a high bias, because the training and testing data are then not in the same domain. 2) Challenging motions and the distinct walking habits of new users can influence the effectiveness of our proposed approach. 3) Our current model is trained and tested on different attachments separately. The challenge is how to train one model jointly on all attachments while improving its robustness. 4) Combining the sequence-based physical model with the proposed neural network model would be an interesting direction to investigate in future work.\n\nOur future work therefore includes extending our work to learn transferable features without the limitations of measurement units, users and environments, realized on all platforms including mobile devices and robots.\n\n\\section{Acknowledgments}\nWe would like to thank the anonymous reviewers for their valuable comments. We also thank our colleagues, Dr. Sen Wang, Dr. Stepheno Rosa, Shuyu Lin, Linhai Xie, Bo Yang, Jiarui Gan, Zhihua Wang, Ronald Clark, Zihang Lai, for their help, and Prof. Hongkai Wen at University of Warwick for useful discussions and the usage of GPU computational resources.\n\n\\bibliographystyle{aaai}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\\label{section one}\nIn this paper, we are concerned with the bias voter model on regular trees. First, we introduce the definition of this process on general graphs. The bias voter model $\\{\\eta_t\\}_{t\\geq 0}$ on a graph $S$ has state space $\\{0,1\\}^S$. That is to say, at each vertex $x\\in S$, there is a spin taking values in $\\{0,1\\}$. For any configuration $\\eta\\in \\{0,1\\}^S$ and $x\\in S$, we denote by $\\eta(x)$ the value of the spin at $x$.\n\nAt $t=0$, each spin takes a value from $\\{0,1\\}$ according to some probability distribution. 
Let $\\lambda>\\theta>0$ be two constants. Each vertex $x$ in state $0$ (resp. $1$) waits for an exponential time with rate $\\lambda$ (resp. $\\theta$) and then chooses a neighbor $y$ uniformly. Then, the value of the spin at $x$ flips to that of the spin at $y$. Therefore, $\\{\\eta_t\\}_{t\\geq 0}$ is a spin system (see Chapter 3 of \\cite{LIG1985}) with flip rate function given by\n\\begin{equation}\\label{equ 1.1 flip rate}\nc(x,\\eta)=\n\\begin{cases}\n\\frac{\\lambda}{{\\rm deg}(x)}\\sum_{y:y\\sim x}\\eta(y)\n\\text{\\quad if~}\\eta(x)=0,\\\\\n\\frac{\\theta}{{\\rm deg}(x)}\\sum_{y:y\\sim x}[1-\\eta(y)] \\text{\\quad\nif~}\\eta(x)=1\n\\end{cases}\n\\end{equation}\nfor any $(x,\\eta)\\in S\\times \\{0,1\\}^S$, where we denote by $x\\sim y$ that $x$ and $y$ are neighbors and denote by ${\\rm deg}(x)$ the degree of $x$.\n\nIntuitively, $0$ and $1$ are two opposing opinions on a topic. Vertices in state $0$ (resp. $1$) are individuals holding opinion $0$ (resp. $1$). Each individual waits for an exponential time, chooses a neighbor at random, and takes the neighbor's opinion as its own. The assumption that $\\lambda>\\theta$ can be interpreted as meaning that $0$ is the more controversial opinion, so that individuals holding it are quicker to reconsider.\n\nIn this paper, we assume that the initial distribution of the process is the product measure with density $p$ for $p\\in (0,1)$, which is denoted by $\\mu_p$. In other words,\n\\[\n\\mu_p(\\eta:\\eta(x)=1,\\forall~x\\in A)=p^{|A|}\n\\]\nfor any $A\\subseteq S$.\n\nWe denote by $P_S^p$ the probability measure of the process $\\{\\eta_t\\}_{t\\geq 0}$ on $S$ with initial distribution $\\mu_p$. It is natural to consider the estimation of $P^p_S(\\eta_t(x)=1)$, which is the probability that $x$ is in state $1$ at the moment $t$. 
When $S$ is the complete graph with $N$ vertices, which we denote by $C_N$, it is easy to prove that $P^p_{C_N}(\\eta_t(x)=1)$ satisfies the following limit:\n\\begin{equation}\\label{equ 1.2}\n\\lim_{N\\rightarrow+\\infty}P^p_{C_N}(\\eta_t(x)=1)=\\frac{pe^{(\\lambda-\\theta)t}}{1-p+pe^{(\\lambda-\\theta)t}}\n\\end{equation}\nfor any $t>0$.\n\nEquation \\eqref{equ 1.2} follows from the classic theory of density-dependent population processes developed by Ethier and Kurtz (see Chapter 11 of \\cite{Ethier1986}). Let $N_t$ be the number of vertices in state $1$ at the moment $t$; then\n\\begin{equation}\\label{equ 1.3}\nN_t\\rightarrow\n\\begin{cases}\nN_t+1 \\text{~at rate~} \\frac{\\lambda}{N}(N-N_t)N_t, \\\\\nN_t-1 \\text{~at rate~} \\frac{\\theta}{N}(N-N_t)N_t.\n\\end{cases}\n\\end{equation}\n\nWhen the initial distribution of $\\{\\eta_t\\}_{t\\geq 0}$ is $\\mu_p$, according to Theorem 11.2.1 of \\cite{Ethier1986} and \\eqref{equ 1.3}, $N_t\/N$ converges weakly to the solution $f(t,p)$ of the following ODE\n\\[\n\\begin{cases}\n\\frac{d}{dt}f(t,p)=(\\lambda-\\theta)f(t,p)[1-f(t,p)],\\\\\nf(0,p)=p\n\\end{cases}\n\\]\nas $N$ grows to infinity. The explicit expression for $f(t,p)$ is exactly the right-hand side of \\eqref{equ 1.2}.\n\nIt is natural to ask whether \\eqref{equ 1.2} holds for other homogeneous graphs $S$. We manage to prove that the answer is positive when $S$ is a regular tree, which we denote by $\\mathbb{T}^N$. For mathematical details, see Section \\ref{section two}.\n\nSince\n\\[\\lim_{t\\rightarrow+\\infty}f(t,p)=1\\]\nwhen $\\lambda>\\theta$, it is natural to guess that $\\eta_t$ converges weakly to $\\delta_1$, the point mass on the configuration where all vertices are in state $1$, as $t$ grows to infinity. When $S$ is a regular tree or a lattice, we can prove that this guess is correct when $\\lambda\/\\theta$ is sufficiently large. 
For mathematical details,\nsee Section \\ref{section two}.\n\nWhen $\\lambda=\\theta$, our model degenerates to the classic voter\nmodel introduced by Clifford and Sudbury in \\cite{Cli1973}. In\n\\cite{Holley1975}, Holley and Liggett gave an important duality\nrelation between the classic voter model and coalescing\nrandom walks, which shows that any invariant measure of the classic\nvoter model is a convex combination of $\\delta_1$ and $\\delta_0$\nif and only if two independent simple random walks on $S$ will\nmeet with probability one. More details can be found in Section 3.4\nand Chapter 5 of \\cite{LIG1985}. The classic voter model is also a\nlinear system (see Chapter 9 of \\cite{LIG1985}), which gives the\nprocess some good properties; for instance, $\\sum_{x\\in S}\\eta_t(x)$\nis a martingale. When $\\lambda>\\theta$, the bias voter model cannot\nbe described via a linear system and has no good duality properties,\nso the classic approaches to voter models are no longer valid.\n\n\\section{Main results}\\label{section two}\nIn this section we give the main results of this paper. We denote by\n$\\mathbb{T}^N$ the regular tree with degree $N+1$. We obtain that\n\\eqref{equ 1.2} holds for the bias voter model on $\\mathbb{T}^N$.\n\n\\begin{theorem}\\label{theorem 2.1 main}\nFor any $t>0$ and $p\\in (0,1)$,\n\\begin{equation}\\label{equ 2.1}\n\\lim_{N\\rightarrow+\\infty}P^p_{\\mathbb{T}^N}(\\eta_t(x)=1)=\\frac{pe^{(\\lambda-\\theta)t}}{1-p+pe^{(\\lambda-\\theta)t}}.\n\\end{equation}\n\\end{theorem}\n\nEquation \\eqref{equ 2.1} gives the limit of the probability that a\ngiven vertex is in state $1$ at the moment $t$ as the degree of the\ntree grows to infinity for any $t>0$. The limit function\n\\[\nf(t,p)=\\frac{pe^{(\\lambda-\\theta)t}}{1-p+pe^{(\\lambda-\\theta)t}}\n\\]\nis usually called the mean-field limit. 
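One can check directly that the mean-field limit solves the logistic ODE from the introduction. Indeed, writing\n\\[\n\\frac{f(t,p)}{1-f(t,p)}=\\frac{p}{1-p}e^{(\\lambda-\\theta)t},\n\\]\nwe obtain\n\\[\n\\frac{d}{dt}\\log\\frac{f(t,p)}{1-f(t,p)}=\\frac{\\frac{d}{dt}f(t,p)}{f(t,p)[1-f(t,p)]}=\\lambda-\\theta,\n\\]\nso that\n\\[\n\\frac{d}{dt}f(t,p)=(\\lambda-\\theta)f(t,p)[1-f(t,p)],\\quad f(0,p)=p.\n\\]\n\n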
Please note that\n$P_{\\mathbb{T}^N}^p(\\eta_t(x)=1)$ does not depend on the choice of $x$\nsince $\\mu_p$ is a translation invariant measure on\n$\\{0,1\\}^{\\mathbb{T}^N}$ and the flip rate function of\n$\\{\\eta_t\\}_{t\\geq 0}$ given by \\eqref{equ 1.1 flip rate} is also\ntranslation invariant.\n\nThe proof of Theorem \\ref{theorem 2.1 main} is in Section\n\\ref{section 3}. The core step of the proof is to show that\n$\\eta_t(x)$ and $\\eta_t(y)$ are asymptotically independent for a\npair of neighbors $x$ and $y$ as the degree of the tree grows to\ninfinity.\n\nIt is natural to ask whether the counterpart of Theorem \\ref{theorem\n2.1 main} for the bias voter model on lattices $\\mathbb{Z}^d,\nd=1,2,\\ldots$ holds. We conjecture that the answer is positive, but\nwe have not managed to prove it.\n\nWe write $\\eta_t\\Rightarrow \\mu$ when the process\n$\\{\\eta_t\\}_{t\\geq 0}$ converges weakly to a probability measure\n$\\mu$. That is to say, $\\eta_t\\Rightarrow \\mu$ if and only if\n\\[\n\\lim_{t\\rightarrow+\\infty}Eg(\\eta_t)=\\int_{\\{0,1\\}^S}g(\\eta)~\\mu(d\\eta)\n\\]\nfor any $g\\in C(\\{0,1\\}^S)$. The mean-field limit $f(t,p)$ given by\n\\eqref{equ 2.1} satisfies $f(t,p)\\rightarrow 1$ as\n$t\\rightarrow+\\infty$. So it is natural to guess that\n$P_{\\mathbb{T}^N}^p(\\eta_t(x)=1)\\rightarrow 1$ and therefore\n$\\eta_t\\Rightarrow \\delta_1$, the point mass on the configuration\nwhere all the vertices are in state $1$. The following two theorems\nshow that this guess holds for the bias voter model on trees and\nlattices when $\\lambda\/\\theta$ is sufficiently large.\n\n\\begin{theorem}\\label{theorem 2.2}\nFor each $N\\geq 2$, there is a constant $A(N)>0$ such that, when\n$\\lambda\/\\theta>A(N)$,\n\\[\n\\eta_t\\Rightarrow\\delta_1\n\\]\nfor the bias voter model $\\{\\eta_t\\}_{t\\geq 0}$ on $\\mathbb{T}^N$\nwith initial distribution $\\mu_p$ with $p\\in (0,1)$. 
The sequence\n$\\{A(N)\\}_{N\\geq 2}$ satisfies\n\\begin{equation}\\label{equ 2.2}\n\\limsup_{N\\rightarrow+\\infty}\\frac{A(N)}{\\sqrt{N}}\\leq 1.\n\\end{equation}\n\\end{theorem}\n\nThe main approach to proving Theorem \\ref{theorem 2.2} is to compare\nthe bias voter model with a contact process on the tree. The fact that\nthe strongly surviving contact process on the tree satisfies the complete\nconvergence theorem is crucial for our proof. The asymptotic bound\n\\eqref{equ 2.2} on $A(N)$ follows from an important estimate of\nthe second critical value of the contact process on the tree. For\nmathematical details, see Section \\ref{section 4}. For more about\nthe contact process on trees, see \\cite{Pem1992} and Part 1 of\n\\cite{LIG1999}.\n\nWe denote by $\\mathbb{Z}^d$ the lattice with degree $2d$. The\nfollowing theorem is a counterpart of Theorem \\ref{theorem 2.2} for\nthe bias voter model on $\\mathbb{Z}^d$.\n\n\\begin{theorem}\\label{theorem 2.3}\nFor each $d\\geq 1$, consider the bias voter model $\\{\\eta_t\\}_{t\\geq 0}$\non $\\mathbb{Z}^d$ with initial distribution $\\mu_p$ with $p\\in\n(0,1)$. When $\\lambda\/\\theta>4$,\n\\[\n\\eta_t\\Rightarrow\\delta_1.\n\\]\n\\end{theorem}\nThe proof of Theorem \\ref{theorem 2.3} follows nearly the same analysis\nas that of Theorem \\ref{theorem 2.2}. The assumption\n$\\lambda\/\\theta>4$ relies on the fact that the critical value for\nthe contact process on $\\mathbb{Z}^d$ is at most $2\/d$. For more\ndetails, see Section \\ref{section 4}.\n\n\\section{Mean-field limit}\\label{section 3}\nIn this section, we will prove Theorem \\ref{theorem 2.1 main}. For\nany $t>0$ and $p\\in (0,1)$, we define\n\\[\nf(t,p)=\\frac{pe^{(\\lambda-\\theta)t}}{1-p+pe^{(\\lambda-\\theta)t}}.\n\\]\nSince we focus on the case where $S=\\mathbb{T}^N$ in this\nsection, we rewrite $P^p_{\\mathbb{T}^N}$ as $P^p_N$. 
First it is\neasy to show that $f(t,p)$ is an upper bound of $P^p_N(\\eta_t(x)=1)$\nfor any $t\\geq 0$ and $N\\geq 1$.\n\n\\begin{lemma}\\label{lemma 3.1}\nFor any $t\\geq 0$ and $N\\geq 1$,\n\\begin{equation}\\label{equ 3.1}\nP_N^p(\\eta_t(x)=1)\\leq f(t,p).\n\\end{equation}\n\\end{lemma}\n\n\\proof\n\nAccording to the flip rate function $c(x,\\eta)$ of\n$\\{\\eta_t\\}_{t\\geq 0}$ given by \\eqref{equ 1.1 flip rate} and\nthe Hille-Yosida Theorem,\n\\begin{align}\\label{equ 3.2}\n\\frac{d}{dt}P^p_N(\\eta_t(x)=1)=&\\frac{\\lambda}{N+1}\\sum_{y:y\\sim\nx}P_N^p(\\eta_t(x)=0,\\eta_t(y)=1)\\\\\n&-\\frac{\\theta}{N+1}\\sum_{y:y\\sim\nx}P_N^p(\\eta_t(x)=1,\\eta_t(y)=0)\\notag\n\\end{align}\nfor any $t>0$.\n\nSince $\\mu_p$ and $c(x,\\eta)$ are translation invariant,\n\\[\nP_N^p(\\eta_t(x)=0,\\eta_t(y)=1)=P_N^p(\\eta_t(x)=1,\\eta_t(y)=0)\n\\]\nand this probability does not rely on the choice of the neighbor $y$.\n\nTherefore,\n\\begin{equation}\\label{equ 3.3}\n\\frac{d}{dt}P^p_N(\\eta_t(x)=1)=(\\lambda-\\theta)P_N^p(\\eta_t(x)=1,\\eta_t(y)=0),\n\\end{equation}\nwhere $y$ is a fixed neighbor of $x$.\n\nIt is easy to check that the bias voter model is an attractive spin\nsystem (see Section 3.2 of \\cite{LIG1985}). 
Therefore, the two\nevents $\\{\\eta_t(x)=1\\}$ and $\\{\\eta_t(y)=0\\}$ are negatively\ncorrelated when the initial distribution is $\\mu_p$ according to\nTheorem 2.2.14 of \\cite{LIG1985}.\n\nAs a result,\n\\begin{align*}\nP_N^p(\\eta_t(x)=1,\\eta_t(y)=0)&\\leq\nP_N^p(\\eta_t(x)=1)P_N^p(\\eta_t(y)=0)\\\\\n&=P_N^p(\\eta_t(x)=1)[1-P_N^p(\\eta_t(x)=1)]\n\\end{align*}\nand hence\n\\begin{equation}\\label{equ 3.4}\n\\frac{d}{dt}\\Big[\\log\n\\frac{P_N^p(\\eta_t(x)=1)}{1-P_N^p(\\eta_t(x)=1)}\\Big]\\leq\n(\\lambda-\\theta)\n\\end{equation}\nby \\eqref{equ 3.3}.\n\nAccording to \\eqref{equ 3.4},\n\\begin{equation}\\label{equ 3.5}\n\\log \\frac{P_N^p(\\eta_t(x)=1)}{1-P_N^p(\\eta_t(x)=1)}-\\log\n\\frac{p}{1-p}\\leq (\\lambda-\\theta)t\n\\end{equation}\nfor any $t>0$.\n\nEquation \\eqref{equ 3.1} follows from \\eqref{equ 3.5} directly.\n\n\\qed\n\nTo give a lower bound of $P_N^p(\\eta_t(x)=1)$, we give another\ndescription of the bias voter model $\\{\\eta_t\\}_{t\\geq 0}$. We are\ninspired by the graphical representation approach introduced by\nHarris in \\cite{Har1978} and the construction of stochastic\nprocesses of spin systems with exchange dynamics introduced by\nDurrett and Neuhauser in \\cite{Durrett1994}. For any $x,y\\in\n\\mathbb{T}^N, x\\sim y$, we assume that $\\{N_{(x,y)}(t):t\\geq 0\\}$ is\na Poisson process with rate $(\\lambda+\\theta)\/(N+1)$. Please note\nthat the order of $x$ and $y$ matters, so $N_{(x,y)}\\neq N_{(y,x)}$.\nWe assume that all these Poisson processes are independent. At\n$t=0$, each spin takes a value from $\\{0,1\\}$ according to the\ndistribution $\\mu_p$. Then, the spin at $x$ may change its value\nonly at event times of $N_{(x,y)},y\\sim x$. 
For any $t>0$, we define\n\\[\n\\eta_{t-}(x)=\\lim_{s\\uparrow t,s<t}\\eta_s(x).\n\\]\nAt each event time $t$ of $N_{(x,y)}$, if $\\eta_{t-}(x)=0$ and\n$\\eta_{t-}(y)=1$, then the spin at $x$ flips to $1$ with probability\n$\\lambda\/(\\lambda+\\theta)$; if $\\eta_{t-}(x)=1$ and $\\eta_{t-}(y)=0$,\nthen the spin at $x$ flips to $0$ with probability\n$\\theta\/(\\lambda+\\theta)$; otherwise nothing happens. Since each\nordered pair $(x,y)$ rings at rate $(\\lambda+\\theta)\/(N+1)$, the\nprocess constructed in this way has exactly the flip rates given by\n\\eqref{equ 1.1 flip rate}.\n\nFor any $T>0$, we define\n\\[\nA_{x,y}(T)=\\{N_{(x,y)}(T)=N_{(y,x)}(T)=0\\}\n\\]\nas the random event that the first event time of $N_{(x,y)}$ and\n$N_{(y,x)}$ does not come before $T$.\n\nThen,\n\\[\nP_N^p(A_{x,y}(T))=e^{-\\frac{2(\\lambda+\\theta)}{N+1}T}\n\\]\nand hence\n\\begin{equation}\\label{equ 3.6}\n\\lim_{N\\rightarrow+\\infty}P_N^p(A_{x,y}(T))=1\n\\end{equation}\nfor any $T>0$ and $p\\in (0,1)$.\n\nAfter all this preparatory work, we can now give the proof of Theorem\n\\ref{theorem 2.1 main}.\n\n\\proof[Proof of Theorem \\ref{theorem 2.1 main}]\n\nAccording to \\eqref{equ 3.3}, $P^p_N(\\eta_t(x)=1)$ is increasing\nwith $t$, therefore by Lemma \\ref{lemma 3.1},\n\\begin{equation}\\label{equ 3.7}\np\\leq P^p_N(\\eta_t(x)=1)\\leq f(T,p)<1\n\\end{equation}\nfor any $t\\in [0,T]$ and $N\\geq 1$.\n\nFor any $t\\in [0,T]$ and $x\\sim y\\in \\mathbb{T}^N$,\n\\begin{align}\\label{equ 3.8}\nP_N^p(\\eta_t(x)=1,\\eta_t(y)=0)&\\geq P_N^p\\Big(\\eta_t(x)=1,\\eta_t(y)=0,A_{x,y}(T)\\Big)\\\\\n&=P_N^p\\Big(\\eta_t(x)=1,\\eta_t(y)=0\\Big|A_{x,y}(T)\\Big)P_N^p\\Big(A_{x,y}(T)\\Big).\\notag\n\\end{align}\n\nFor $x\\sim y\\in \\mathbb{T}^N$, let\n\\[\nC_y(x)=\\{z\\in \\mathbb{T}^N: \\text{~there is a path avoiding $y$ from\n$x$ to $z$}\\}.\n\\]\nConditioned on $A_{x,y}(T)$, $\\{\\eta_t(z):z\\in C_y(x)\\}_{t\\leq T}$\nand $\\{\\eta_t(w):w\\in C_x(y)\\}_{t\\leq T}$ are independent when the\ninitial distribution is $\\mu_p$, since vertices in $C_x(y)$ cannot\nexchange opinions with vertices in $C_y(x)$ before the moment $T$.\n\nAs a result,\n\\begin{align}\\label{equ 3.9}\n&P_N^p\\Big(\\eta_t(x)=1,\n\\eta_t(y)=0\\Big|A_{x,y}(T)\\Big)\\notag\\\\\n&=P_N^p\\Big(\\eta_t(x)=1\\Big|A_{x,y}(T)\\Big)P_N^p\\Big(\\eta_t(y)=0\\Big|A_{x,y}(T)\\Big)\\\\\n&\\geq\n\\frac{\\Big[P_N^p\\big(\\eta_t(x)=1\\big)-P_N^p\\big(A^c_{x,y}(T)\\big)\\Big]\n\\Big[P_N^p\\big(\\eta_t(y)=0\\big)-P_N^p\\big(A^c_{x,y}(T)\\big)\\Big]}{\\Big[P^p_N\\big(A_{x,y}(T)\\big)\\Big]^2}\\notag\n\\end{align}\nfor any $t\\in [0,T]$, where $A^c_{x,y}(T)$ is the complement\nof $A_{x,y}(T)$.\n\nBy \\eqref{equ 3.3}, \\eqref{equ 3.8} and \\eqref{equ 3.9}, for $t\\in\n[0,T]$,\n\\begin{equation}\\label{equ 3.10}\n\\frac{d}{dt}P_N^p(\\eta_t(x)=1)\\geq(\\lambda-\\theta)P_N^p(\\eta_t(x)=1)\\big[1-P_N^p(\\eta_t(x)=1)\\big]G_t^p(x,y,N),\n\\end{equation}\nwhere\n\\begin{align}\\label{equ 3.11}\nG_t^p(x,y,N)&=\\frac{\\Big[1-\\frac{P_N^p\\big(A^c_{x,y}(T)\\big)}{P_N^p\\big(\\eta_t(x)=1\\big)}\\Big]\\Big[1-\\frac{P_N^p\\big(A^c_{x,y}(T)\\big)}\n{P_N^p\\big(\\eta_t(x)=0\\big)}\\Big]}{P_N^p\\big(A_{x,y}(T)\\big)}\\notag\\\\\n&\\geq\n\\frac{\\Big[1-\\frac{P_N^p\\big(A^c_{x,y}(T)\\big)}{p}\\Big]\\Big[1-\\frac{P_N^p\\big(A^c_{x,y}(T)\\big)}\n{1-f(T,p)}\\Big]}{P_N^p\\big(A_{x,y}(T)\\big)}\n\\end{align}\naccording to \\eqref{equ 3.7}.\n\nBy \\eqref{equ 3.6} and \\eqref{equ 3.11}, for any $\\epsilon>0$ and\n$T>0$, there exists $N(\\epsilon,T)>0$ such that\n\\begin{equation}\\label{equ 3.12}\nG_t^p(x,y,N)\\geq 1-\\epsilon\n\\end{equation}\nfor any $N\\geq N(\\epsilon,T)$ and $t\\in [0,T]$.\n\nBy \\eqref{equ 3.10} and \\eqref{equ 3.12},\n\\begin{equation}\\label{equ 3.13}\n\\frac{d}{dt}P_N^p(\\eta_t(x)=1)\\geq\n(\\lambda-\\theta)(1-\\epsilon)P_N^p(\\eta_t(x)=1)\\big[1-P_N^p(\\eta_t(x)=1)\\big]\n\\end{equation}\nfor $N\\geq N(\\epsilon,T)$ and $t\\in [0,T]$.\n\nBy \\eqref{equ 3.13},\n\\[\n\\frac{d}{dt}\\Big[\\log\n\\frac{P_N^p(\\eta_t(x)=1)}{1-P_N^p(\\eta_t(x)=1)}\\Big]\\geq\n(\\lambda-\\theta)(1-\\epsilon)\n\\]\nand hence\n\\begin{equation}\\label{equ 3.14}\nP_N^p(\\eta_t(x)=1)\\geq\n\\frac{pe^{(\\lambda-\\theta)(1-\\epsilon)t}}{1-p+pe^{(\\lambda-\\theta)(1-\\epsilon)t}}\n\\end{equation}\nfor $N\\geq N(\\epsilon, T)$ and $t\\in [0,T]$.\n\nBy \\eqref{equ 3.14},\n\\begin{equation}\\label{equ 3.15}\n\\liminf_{N\\rightarrow+\\infty}P_N^p(\\eta_t(x)=1)\\geq\n\\frac{pe^{(\\lambda-\\theta)(1-\\epsilon)t}}{1-p+pe^{(\\lambda-\\theta)(1-\\epsilon)t}}\n\\end{equation}\nfor any $t>0$ and $\\epsilon>0$.\n\nTheorem \\ref{theorem 2.1 main} follows from \\eqref{equ 3.1} and\n\\eqref{equ 3.15} directly.\n\n\\qed\n\nIn the proof above, we show that $\\eta_t(x)$ and $\\eta_t(y)$ are\nasymptotically independent as $N$ grows to infinity, since\n$\\eta_t(x)$ and $\\eta_t(y)$ are independent conditioned on\n$A_{x,y}(t)$ and the probability of $A_{x,y}(t)$ converges to $1$ as\n$N$ grows to infinity. If we could show that $\\eta_t(x)$ and\n$\\eta_t(y)$ are asymptotically independent for $x\\sim y, x,y\\in\n\\mathbb{Z}^d$ as $d$ grows to infinity, then we could extend Theorem\n\\ref{theorem 2.1 main} to the case where the bias voter model is on\nthe lattice. We leave this problem for future study.\n\n\\section{Weak convergence}\\label{section 4}\nIn this section we will give the proofs of Theorem \\ref{theorem 2.2}\nand Theorem \\ref{theorem 2.3}. After a time rescaling, it is\neasy to see that the limit behavior of $\\{\\eta_t\\}_{t\\geq 0}$ only\ndepends on $\\lambda\/\\theta$, so in this section we assume that\n$\\theta=1$.\n\nThe proofs of Theorems \\ref{theorem 2.2} and \\ref{theorem 2.3} are\nvery similar, so we only give the details of the proof of Theorem\n\\ref{theorem 2.2}; for Theorem \\ref{theorem 2.3} we only give a\nsketch.\n\nFirst we introduce the definition of the contact process\n$\\{\\zeta_t\\}_{t\\geq 0}$ on $\\mathbb{T}^N$. 
$\\{\\zeta_t\\}_{t\\geq 0}$\nis a spin system with state space $\\{0,1\\}^{\\mathbb{T}^N}$ and flip\nrate function given by\n\\begin{equation}\\label{equ 4.1 rate function of contact process}\nc_1(x,\\zeta)=\n\\begin{cases}\n1 \\text{\\quad if~} \\zeta(x)=1,\\\\\n\\frac{\\lambda}{N+1}\\sum_{y:y\\sim x}\\zeta(y) \\text{\\quad if~}\n\\zeta(x)=0\n\\end{cases}\n\\end{equation}\nfor any $(x,\\zeta)\\in \\mathbb{T}^N\\times \\{0,1\\}^{\\mathbb{T}^N}$.\n\nThe contact process was first introduced by Harris in \\cite{Har1974}.\nChapter 6 of \\cite{LIG1985} and Part 1 of \\cite{LIG1999} give a\ndetailed summary of the main properties of the contact process.\nIntuitively, the contact process describes the spread of an\ninfectious disease. Vertices in state $1$ are infected while vertices\nin state $0$ are healthy. An infected vertex waits for an\nexponential time with rate $1$ to become healthy and a healthy\nvertex is infected at a rate proportional to the number of infected\nneighbors.\n\nAccording to the basic coupling of spin systems (see Section 3.1 of\n\\cite{LIG1985}), we can also use $P_N^p$ to denote the probability\nmeasure of the contact process $\\{\\zeta_t\\}_{t\\geq 0}$ on\n$\\mathbb{T}^N$ with initial distribution $\\mu_p$. 
We write $P_N^p$\nas $P_{N,\\lambda}^p$ when we need to specify $\\lambda$.\n\nThe following lemma shows that we can control the evolution of the\nbias voter model $\\{\\eta_t\\}_{t\\geq 0}$ from below by the contact\nprocess $\\{\\zeta_t\\}_{t\\geq 0}$, which is crucial for us to prove\nTheorem \\ref{theorem 2.2}.\n\n\\begin{lemma}\\label{lemma 4.1}\nAssume that $\\{\\eta_t\\}_{t\\geq 0}$ is the bias voter model with flip\nrate function $c(x,\\eta)$ given by \\eqref{equ 1.1 flip rate} with\n$\\theta=1$ and $\\{\\zeta_t\\}_{t\\geq 0}$ is the contact process with\nflip rate function $c_1(x,\\zeta)$ given by \\eqref{equ 4.1 rate\nfunction of contact process}, then\n\\begin{equation}\\label{equ 4.2}\nP_{N,\\lambda}^p(\\eta_t(x)=0,\\forall~x\\in A)\\leq\nP_{N,\\lambda}^p(\\zeta_t(x)=0,\\forall~x\\in A)\n\\end{equation}\nfor any $A\\subseteq \\mathbb{T}^N$ and any $t\\geq 0$.\n\\end{lemma}\n\n\\proof\n\nFor any $\\eta,\\zeta\\in \\{0,1\\}^{\\mathbb{T}^N}$, we write $\\eta\\geq\n\\zeta$ if and only if $\\eta(x)\\geq \\zeta(x)$ for any $x\\in\n\\mathbb{T}^N$.\n\nBy direct calculation, it is easy to check that\n\\begin{equation}\\label{equ 4.3}\n\\begin{cases}\nc(x,\\eta)\\geq c_1(x,\\zeta) \\text{\\quad if~} \\eta(x)=\\zeta(x)=0,\\\\\nc(x,\\eta)\\leq c_1(x,\\zeta) \\text{\\quad if~} \\eta(x)=\\zeta(x)=1\n\\end{cases}\n\\end{equation}\nfor any $\\eta\\geq \\zeta$.\n\nBy \\eqref{equ 4.3} and Theorem 3.1.5 of \\cite{LIG1985},\n\\begin{equation}\\label{equ 4.4}\n\\eta_t\\geq \\zeta_t\n\\end{equation}\nfor any $t>0$ in the sense of coupling when $\\eta_0$ and $\\zeta_0$\nhave the same distribution $\\mu_p$.\n\nEquation \\eqref{equ 4.2} follows from \\eqref{equ 4.4} directly.\n\n\\qed\n\nNow we introduce the second critical value of the contact process on\nthe tree: the value of $\\lambda$ above which the complete convergence\ntheorem holds.\n\nThe contact process $\\{\\zeta_t\\}_{t\\geq 0}$ is an attractive spin\nsystem (see Section 3.2 of \\cite{LIG1985}), 
therefore\n\\[\nP_{N,\\lambda_1}^1(\\exists~ t_n \\uparrow +\\infty,\n\\zeta_{t_n}(x)=1,\\forall~n\\geq 1)\\geq P_{N,\\lambda_2}^1(\\exists~ t_n\n\\uparrow +\\infty, \\zeta_{t_n}(x)=1,\\forall~n\\geq 1)\n\\]\nfor $\\lambda_1>\\lambda_2$. As a result, it is reasonable to define\nthe following critical value for each $N\\geq 2$,\n\\begin{equation}\\label{equ 4.5}\nA(N)=\\sup\\{\\lambda: P_{N,\\lambda}^1(\\exists~ t_n \\uparrow +\\infty,\n\\zeta_{t_n}(x)=1,\\forall~n\\geq 1)=0\\}.\n\\end{equation}\n\n$A(N)$ is called the second critical value of the contact process on\n$\\mathbb{T}^N$. When $\\lambda>A(N)$, the contact process is said to\nsurvive strongly. For more details, see Section 1.4 of\n\\cite{LIG1999}.\n\nAccording to Theorem 1.4.65 of \\cite{LIG1999},\n\\[\n\\limsup_{N\\rightarrow+\\infty}\\sqrt{N}\\frac{A(N)}{N+1}\\leq 1,\n\\]\nwhich is exactly equation \\eqref{equ 2.2}.\n\nThe following lemma is a corollary of the complete convergence\ntheorem for the strongly surviving contact process on the tree.\nPlease note that we denote by $\\delta_0$ the configuration in\n$\\{0,1\\}^{\\mathbb{T}^N}$ where all the vertices are in state $0$.\n\n\\begin{lemma}\\label{lemma 4.2}\nWhen $\\lambda>A(N)$, there is a probability measure\n$\\nu_\\lambda$ on $\\{0,1\\}^{\\mathbb{T}^N}$ such that\n\\begin{equation}\\label{equ 4.6}\n\\nu_\\lambda(\\zeta:\\zeta=\\delta_0)=0\n\\end{equation}\nand\n\\begin{equation}\\label{equ 4.7}\n\\zeta_t\\Rightarrow \\nu_\\lambda\n\\end{equation}\nwhen $\\zeta_0$ has probability distribution $\\mu_p$ with $p\\in\n(0,1)$.\n\\end{lemma}\n\n\\proof\n\nWe denote by $\\zeta_t^1$ the contact process with\n$\\zeta_0=\\delta_1$. 
According to Theorem 3.2.3 and Theorem 6.1.6 of\n\\cite{LIG1985}, when $\\lambda>A(N)$, there exists a probability\nmeasure $\\nu_\\lambda$ such that\n\\[\n\\zeta_t^1\\Rightarrow \\nu_\\lambda\n\\]\nand\n\\[\n\\nu_\\lambda(\\zeta:\\zeta=\\delta_0)=0.\n\\]\n\nLet\n\\[\n\\tau=\\inf\\{t:\\zeta_t=\\delta_0\\},\n\\]\nthen Theorem 1 of \\cite{Zhang1996} shows that for any probability\nmeasure $\\mu$ on $\\{0,1\\}^{\\mathbb{T}^N}$ and $\\{\\zeta_t\\}_{t\\geq\n0}$ with initial distribution $\\mu$,\n\\begin{equation}\\label{equ 4.8}\n\\zeta_t\\Rightarrow\nP_\\mu(\\tau<+\\infty)\\delta_0+P_\\mu(\\tau=+\\infty)\\nu_\\lambda\n\\end{equation}\nwhen $\\lambda>A(N)$.\n\nWhen $\\mu=\\mu_p$ for $p\\in (0,1)$, there are infinitely many vertices\nin state $1$ at $t=0$ with probability one and hence\n\\begin{equation}\\label{equ 4.9}\nP_N^p(\\tau<+\\infty)=0.\n\\end{equation}\n\nLemma \\ref{lemma 4.2} follows from \\eqref{equ 4.8} and \\eqref{equ\n4.9} directly.\n\n\\qed\n\nAn equation of the form \\eqref{equ 4.8} is called a complete\nconvergence theorem, which shows that the process with any initial\ndistribution converges weakly to a convex combination of invariant\nmeasures. In \\cite{Bez1990}, Bezuidenhout and Grimmett show that the\ncomplete convergence theorem holds for the contact process on\n$\\mathbb{Z}^d$. References \\cite{Zhang1996} by Zhang and\n\\cite{Sal1998} by Salzano and Schonmann give two different\nproofs of the complete convergence theorem for the strongly surviving\ncontact process on trees. In \\cite{ChenX2009} and \\cite{Yao2012},\nChen and Yao show that the complete convergence theorem holds for\nthe contact process in a random environment on $\\mathbb{Z}^+\\times \\mathbb{Z}^d$. 
In\n\\cite{Handjani1999}, Handjani shows that the complete convergence\ntheorem holds for the threshold-one voter model on $\\mathbb{Z}^d$:\nthe process with any initial distribution converges weakly\nto a convex combination of three invariant measures.\n\nBy \\eqref{equ 3.3}, $P_N^p(\\eta_t(x)=1)$ is increasing with $t$, so\nit is reasonable to define\n\\[\nh(N,p)=\\lim_{t\\rightarrow+\\infty}P_N^p(\\eta_t(x)=1)\n\\]\nfor each $N\\geq 1$ and $p\\in (0,1)$. It is easy to see that\n$\\{\\eta_t\\}_{t\\geq 0}$ with initial distribution $\\mu_p$ converges\nweakly to $\\delta_1$ if and only if $h(N,p)=1$. The following\nlemma shows that there is a subsequence of $\\{\\eta_t\\}_{t\\geq 0}$\nthat converges weakly to a convex combination of $\\delta_1$ and\n$\\delta_0$.\n\n\\begin{lemma}\\label{lemma 4.3}\nFor $\\{\\eta_t\\}_{t\\geq 0}$ with initial distribution $\\mu_p$ on\n$\\mathbb{T}^N$, there is a sequence $\\{t_n\\}_{n\\geq 1}$ increasing\nto infinity such that\n\\[\n\\eta_{t_n}\\Rightarrow h(N,p)\\delta_1+[1-h(N,p)]\\delta_0.\n\\]\n\\end{lemma}\n\n\\proof\n\nBy \\eqref{equ 3.3}, it is easy to see that\n\\[\n\\liminf_{t\\rightarrow+\\infty}P_N^p(\\eta_t(x)=1,\\eta_t(y)=0)=0\n\\]\nfor $x\\sim y$. Otherwise, there would be $\\alpha>0$ such that\n$P_N^p(\\eta_t(x)=1,\\eta_t(y)=0)\\geq \\alpha$ for any $t>T_0$, for\nsome sufficiently large $T_0$. 
Then, by \\eqref{equ 3.3},\n\\[\nP_N^p(\\eta_t(x)=1)-P_N^p(\\eta_{T_0}(x)=1)\\geq\n(\\lambda-\\theta)\\alpha(t-T_0)\\rightarrow +\\infty\n\\]\nas $t$ grows to infinity, which is a contradiction.\n\nTherefore, there exists a sequence $\\{t_n\\}_{n\\geq 1}$ increasing to\ninfinity such that\n\\begin{equation}\\label{equ 4.10}\n\\lim_{n\\rightarrow+\\infty}P_N^p(\\eta_{t_n}(x)=1,\\eta_{t_n}(y)=0)=0\n\\end{equation}\nfor $x\\sim y$.\n\nSince $\\{0,1\\}^{\\mathbb{T}^N}$ is a compact space, there is a\nsubsequence of $\\{\\eta_{t_n}\\}_{n\\geq 1}$ that converges weakly to a\nprobability measure on $\\{0,1\\}^{\\mathbb{T}^N}$ according to\nHelly's selection theorem (see Theorem 3.2.6 of \\cite{Dur2010}).\nWithout loss of generality, we can assume that\n$\\{\\eta_{t_n}\\}_{n\\geq 1}$ is itself a convergent sequence.\n\nWe denote by $\\varphi$ the limit distribution of $\\eta_{t_n}$ as $n$\ngrows to infinity. Then, according to \\eqref{equ 4.10},\n\\begin{equation}\\label{equ 4.11}\n\\varphi(\\eta(x)=1,\\eta(y)=0)=0\n\\end{equation}\nfor any $x\\sim y$.\n\nBy \\eqref{equ 4.11},\n\\begin{align*}\n\\varphi(\\eta:\\eta\\neq \\delta_0,\\delta_1)&=\\varphi(\\exists~x\\sim y,\\eta(x)\\neq \\eta(y))\\\\\n&\\leq \\sum_{x\\sim y}\\varphi(\\eta(x)=1,\\eta(y)=0)+\\sum_{x\\sim\ny}\\varphi(\\eta(x)=0,\\eta(y)=1)=0.\n\\end{align*}\n\nAs a result, $\\varphi$ is a convex combination of $\\delta_1$ and\n$\\delta_0$. Since\n\\[\n\\varphi(\\eta(x)=1)=\\lim_{n\\rightarrow+\\infty}P_N^p(\\eta_{t_n}(x)=1)=h(N,p)\n\\]\naccording to the definition of $h$,\n\\[\n\\varphi=h(N,p)\\delta_1+[1-h(N,p)]\\delta_0.\n\\]\n\n\\qed\n\nFinally we can give the proof of Theorem \\ref{theorem 2.2}.\n\n\\proof[Proof of Theorem \\ref{theorem 2.2}]\n\nWe only need to show that $h(N,p)=1$ when $\\lambda>A(N)$. 
When\n$\\lambda>A(N)$, for any $\\epsilon>0$, by \\eqref{equ 4.6} in Lemma\n\\ref{lemma 4.2}, there exists a finite subset $D$ of $\\mathbb{T}^N$\nsuch that\n\\begin{equation}\\label{equ 4.12}\n\\nu_\\lambda(\\zeta(x)=0,\\forall~x\\in D)\\leq \\epsilon.\n\\end{equation}\n\nBy Lemma \\ref{lemma 4.3}, there exists a sequence $\\{t_n\\}_{n\\geq 1}$\nincreasing to infinity such that\n\\begin{equation}\\label{equ 4.13}\n\\lim_{n\\rightarrow+\\infty}P_{N,\\lambda}^p(\\eta_{t_n}(x)=0,\\forall~x\\in\nD)=1-h(N,p).\n\\end{equation}\n\nBy Lemma \\ref{lemma 4.2} and \\eqref{equ 4.12},\n\\begin{equation}\\label{equ 4.14}\n\\lim_{n\\rightarrow+\\infty}P_{N,\\lambda}^p(\\zeta_{t_n}(x)=0,\\forall~x\\in\nD)=\\nu_\\lambda(\\zeta(x)=0,\\forall~x\\in D)\\leq \\epsilon.\n\\end{equation}\n\nBy \\eqref{equ 4.2}, \\eqref{equ 4.13} and \\eqref{equ 4.14},\n\\[\n1-h(N,p)\\leq \\epsilon\n\\]\nfor any $\\epsilon>0$.\n\nAs a result,\n\\[\nh(N,p)=1\n\\]\nwhen $\\lambda>A(N)$ and the proof is complete.\n\n\\qed\n\nAt the end of this section, we give a sketch of the proof of Theorem\n\\ref{theorem 2.3}.\n\n\\proof[Proof of Theorem \\ref{theorem 2.3}]\n\nLet $\\{\\xi_t\\}_{t\\geq 0}$ be the contact process on $\\mathbb{Z}^d$ with\nflip rate function given by\n\\[\nc_2(x,\\xi)=\n\\begin{cases}\n1 \\text{\\quad if~}\\xi(x)=1,\\\\\n\\frac{\\lambda}{2d}\\sum_{y:y\\sim x}\\xi(y)\\text{\\quad if~}\\xi(x)=0\n\\end{cases}\n\\]\nfor any $(x,\\xi)\\in \\mathbb{Z}^d\\times \\{0,1\\}^{\\mathbb{Z}^d}$.\n\nLet $\\lambda(d)$ be the first critical value of $\\{\\xi_t\\}_{t\\geq\n0}$, that is to say,\n\\[\n\\lambda(d)=\\sup\\{\\lambda:\\lim_{t\\rightarrow+\\infty}P_{\\mathbb{Z}^d,\\lambda}^1(\\xi_t(x)=1)=0\\}.\n\\]\nIt is shown in \\cite{Bez1990} that the complete convergence theorem\nholds for $\\{\\xi_t\\}_{t\\geq 0}$ when $\\lambda>\\lambda(d)$. 
Then,\nby an analysis similar to that in the proof of Theorem\n\\ref{theorem 2.2},\n\\[\n\\eta_t\\Rightarrow \\delta_1\n\\]\nfor the bias voter model $\\{\\eta_t\\}_{t\\geq 0}$ on $\\mathbb{Z}^d$\nwith initial distribution $\\mu_p$ with $p\\in (0,1)$ when\n$\\lambda>\\lambda(d)$.\n\nAccording to Corollary 6.4.4 of \\cite{LIG1985},\n\\[\n\\frac{\\lambda(d)}{2d}\\leq \\frac{2}{d}\n\\]\nand hence\n\\[\n\\lambda(d)\\leq 4.\n\\]\nTherefore, when $\\lambda>4$, the bias voter model on $\\mathbb{Z}^d$\nwith initial distribution $\\mu_p$ with $p\\in (0,1)$ converges weakly\nto $\\delta_1$.\n\n\\qed\n\n\\section{Two conjectures}\\label{section 5}\nIn this section we propose two conjectures. The first one is about\nthe mean-field limit of the bias voter model on lattices.\n\n\\begin{conjecture}\\label{conjecture 5.1}\nFor $p\\in (0,1)$,\n\\[\n\\lim_{d\\rightarrow+\\infty}P_{\\mathbb{Z}^d}^p(\\eta_t(x)=1)=\\frac{pe^{(\\lambda-\\theta)t}}{1-p+pe^{(\\lambda-\\theta)t}}\n\\]\nfor any $t>0$.\n\\end{conjecture}\nAs explained in Section \\ref{section 3}, the main difficulty in\nproving Conjecture \\ref{conjecture 5.1} is to show that $\\eta_t(x)$\nand $\\eta_t(y)$ are asymptotically independent for $x\\sim y, x,y\\in\n\\mathbb{Z}^d$ as $d$ grows to infinity. 
Since there are infinitely\nmany paths on the lattice from $x$ to $y$ avoiding the edge\nconnecting $x$ and $y$, our proof of Theorem \\ref{theorem 2.1 main}\ndoes not apply to the case where the process is on the lattice.\n\nThe second conjecture is about the weak convergence of the process.\nWe conjecture that Theorem \\ref{theorem 2.2} and Theorem \\ref{theorem\n2.3} hold under a weaker condition.\n\n\\begin{conjecture}\\label{conjecture 5.2}\nFor any $\\lambda>\\theta$, $S=\\mathbb{T}^d$ or $\\mathbb{Z}^d$ with\n$d\\geq 1$,\n\\[\n\\eta_t\\Rightarrow \\delta_1\n\\]\nfor $\\{\\eta_t\\}_{t\\geq 0}$ on $S$ with initial distribution $\\mu_p$\nwith $p\\in (0,1)$.\n\\end{conjecture}\n\nAccording to the proof of Theorem \\ref{theorem 2.2}, the core step\nin proving Conjecture \\ref{conjecture 5.2} is to verify the claim that\nthe limit distribution of any convergent subsequence of\n$\\{\\eta_t\\}_{t\\geq 0}$ puts no mass on $\\delta_0$. However, for\n$\\lambda$ not large enough for the complete convergence theorem of\nthe contact process to hold, we have not yet found a way to prove this\nclaim.\n\nWe leave these two conjectures for future study and would be glad to\ndiscuss them with interested readers.\n\n\\quad\n\n\\textbf{Acknowledgments.} We are grateful for the financial support\nfrom the National Natural Science Foundation of China with grant\nnumber 11171342 and China Postdoctoral Science Foundation (No.\n2015M571095).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{section:introduction}\n\nThe (reduced) \\emph{Burau representation} \n$$\\rho_n:\\ \\ B_n \\longrightarrow \\text{GL}\\left(n-1,\\mathbb Z[q^{\\pm 1}]\\right)$$ \nwas the first possible candidate for a faithful linear representation of the braid group on $n$ strands $B_n$ and it has long been known to be faithful in the case of the 3-strand braid group~\\cite{magnuspeluso}. 
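For instance, in one common normalization, the reduced Burau representation of $B_{3}$ is given on the standard generators by\n$$\\rho_3(\\sigma_1)=\\begin{pmatrix} -q & 1\\\\ 0 & 1\\end{pmatrix},\\qquad \\rho_3(\\sigma_2)=\\begin{pmatrix} 1 & 0\\\\ q & -q\\end{pmatrix},$$\nand one verifies directly that these matrices satisfy the braid relation $\\rho_3(\\sigma_1)\\rho_3(\\sigma_2)\\rho_3(\\sigma_1)=\\rho_3(\\sigma_2)\\rho_3(\\sigma_1)\\rho_3(\\sigma_2)$.\n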
\nHowever, Moody \\cite{moody} showed that the Burau representation is not faithful for any braid index $n\\geqslant 9$. This was brought down to $n\\geqslant 6$ by Long and Paton \\cite{lp} and finally Bigelow showed the non-faithfulness of $\\rho_5$ \\cite{bigelow}. Despite these negative results, the linearity question of the braid groups was settled in the affirmative, independently by Krammer \\cite{krammer} and Bigelow \\cite{bigelowlinear}. They showed that another linear representation \n$$\\mathcal L_n:\\ \\ B_n\\longrightarrow GL\\left(\\frac{n(n-1)}{2},\\mathbb Z[q^{\\pm 1},t^{\\pm 1}]\\right)$$ \nconstructed by Lawrence \\cite{lawrence} is faithful for all $n$. The representation $\\mathcal{L}_{n}$ is now known as the \\emph{Lawrence-Krammer-Bigelow representation}, or LKB representation for short.\n\nAt present, the question of the faithfulness of the Burau representation in the case $n=4$ remains open. \nThe linearity question itself has been solved; nevertheless, the problem of determining whether $\\rho_4$ is faithful remains of considerable importance:\na negative answer would be of great interest in \nquantum topology, since it is equivalent to the non-faithfulness of the Jones and Temperley-Lieb representations of $B_{4}$, and would\nprovide a non-trivial knot with trivial Jones polynomial \\cite{bi3}.\nAnother interesting related problem is to study the image of the Burau representation -- an old question asks which $(n-1)\\times (n-1)$ matrices over $\\Z[q^{\\pm 1}]$ can appear as the image under the (reduced) Burau representation of some braid \\cite{birman}; it remains wide open.\n\n\n\nThe present paper aims to establish relations between the Garside structures of the braid group and the Burau representation. Our motivation was to understand to what extent the Burau representation is close to being faithful, and when the faithfulness property breaks down. 
This not only helps to attack the faithfulness problem of the 4-strand Burau representation, but also provides new insights into the image and the kernel of the Burau representation for arbitrary braid index, even for the simplest case $n=3$ (see Corollary \\ref{cor:3strands} below). \n\n\n\n\nThe \\emph{classical Garside structure}\nconsists of a lattice structure together with a special element $\\Delta$ satisfying properties first discovered by Garside in \\cite{garside}. \nA crucial output of this structure is the\n\\emph{classical (left) normal form} of a braid $x$, which is a unique decomposition of the form \n$$N_{\\sf c}(x) = \\Delta^p s_1\\cdots s_r$$\n in which the factors belong to the set of the so-called \\emph{simple elements}. \nThe \\emph{classical} \\emph{supremum} and \\emph{infimum} of $x$ are defined by $\\sup_{\\sf c}(x)= p+r$, $\\inf_{\\sf c}(x)=p$, respectively. The \\emph{classical canonical length} of $x$ is defined by $\\ell_{\\sf c}(x)=r$; \nthe \\emph{classical Garside length} $l_{\\sf c}(x)$ is the length of $x$ with respect to the simple elements. The latter satisfies\n $l_{\\sf c}(x)=\\max(\\sup_{\\sf c}(x),0)-\\min(\\inf_{\\sf c}(x),0)$.\n\nSlightly different but very close in spirit is the \\emph{dual Garside structure} (or BKL structure) discovered by Birman, Ko and Lee \\cite{bkldual}, which leads to the \\emph{dual (left) normal form} of a braid $x$:\n$$N_{\\sf d}(x) = \\delta^p d_1\\cdots d_r$$\nwhere the factors belong to the set of the so-called \\emph{dual simple elements}.\nThe dual \\emph{supremum}, \\emph{infimum} and \\emph{canonical length} of a braid $x$ are defined similarly and denoted by $\\sup_{\\sf d}(x)$, $\\inf_{\\sf d}(x)$ and $\\ell_{\\sf d}(x)$ respectively. The \\emph{dual Garside length} $l_{\\sf d}(x)$ is the length of $x$ with respect to the dual simple elements; it satisfies $l_{\\sf d}(x)=\\max(\\sup_{\\sf d}(x),0)-\\min(\\inf_{\\sf d}(x),0)$. 
\nSee Section \\ref{section:garside} for more details on both classical and dual Garside structures of the braid group.\n\n\nOur first main result provides a non-vanishing criterion for the Burau representation $\\rho_4$ using the classical Garside structure.\n\\theoremstyle{plain}\n\\newtheorem*{thmMainclassical}{Theorem \\ref{theorem:main_classical}}\n\\begin{thmMainclassical}\nIf the classical left normal form of a 4-braid $x$\ndoes not contain a factor $(\\sigma_{2}\\sigma_{1}\\sigma_{3})$\nthen $\\rho_{4}(x) \\neq 1$. \n\\end{thmMainclassical}\n\n\nIn the dual framework, we obtain more general and stronger connections.\nFor a Laurent polynomial $\\Lambda$ in the variable $q$, \nlet us denote by $m(\\Lambda)$ and $M(\\Lambda)$ the minimal and maximal degrees of the variable occurring in $\\Lambda$, respectively; for instance, $m(q^{-2}+3q)=-2$ and $M(q^{-2}+3q)=1$. As a convention, we define $m(0)= +\\infty$ and $M(0)=-\\infty$.\n\n For a matrix $\\Lambda=(\\Lambda_{ij})\\in GL\\left(n-1,\\mathbb Z[q^{\\pm 1}]\\right)$, \nwe set $$m(\\Lambda)=\\min \\{m(\\Lambda_{ij}), 1\\leqslant i,j\\leqslant n-1\\},\\ \\textrm{and} \\ M(\\Lambda)=\\max \\{M(\\Lambda_{ij}), 1\\leqslant i,j\\leqslant n-1\\}.$$\n\n\n\nIn Section \\ref{section:dual} we will introduce a notion of \\emph{simply-nested} braid; roughly speaking, simply-nestedness is a local condition \non the factors of the dual left normal form of a braid. \nWe will show that the Burau representation completely determines the normal form of simply-nested braids.\n\n\n\n\\newtheorem*{ThmMainDual}{Theorem \\ref{theorem:main_dual}}\n\\begin{ThmMainDual}\nLet $x \\in B_n$ be a simply-nested braid. 
\n\\begin{enumerate}\n\\item[(i)] $\\sup_{\\sf d}(x) = M(\\rho_n(x))$.\n\\item[(ii)] One can compute the dual normal form from the matrix $\\rho_{n}(x)$, so the restriction of the Burau representation to the set of simply-nested braids $B_{n}^{\\sf sn}$ is injective.\n\\end{enumerate} \\end{ThmMainDual}\n\nThis provides several consequences for faithfulness questions in general.\nFirst of all, it follows that the Burau matrix of a 3-strand braid completely determines its dual normal form.\n\\newtheorem*{Corollary3strds}{Corollary \\ref{cor:3strands}}\n\\begin{Corollary3strds}\nLet $x\\in B_3$. Then \n\\begin{enumerate}\n\\item[(i)] $\\sup_{\\sf d}(x) = M(\\rho_{3}(x))$,\n\\item[(ii)] $\\inf_{\\sf d}(x) = m(\\rho_{3}(x))$,\n\\item[(iii)] one can compute the dual normal form of $x$ from the matrix $\\rho_{3}(x)$.\n\\end{enumerate}\\end{Corollary3strds}\n\n\nFor the 4-strand braid group, we will see:\n\n\\newtheorem*{Corollary4stds}{Corollary \\ref{cor:4strands}}\n\\begin{Corollary4stds}\nLet $x\\in B_4$ and $N_{\\sf d}(x)=\\delta^pd_1\\cdots d_r$. Assume that for all $i=1,\\ldots,r-1$,\n$(d_i,d_{i+1})$ is not in the following list:\n$$ \\left\\{ \\begin{array}{c}(a_{1,2}a_{3,4},a_{2,4}),(a_{1,2}a_{3,4},a_{3,4}a_{2,3}),(a_{1,2}a_{3,4},a_{1,2}a_{1,4}),\\\\\n\\ (a_{2,3}a_{1,4},a_{1,3}), (a_{2,3}a_{1,4},a_{1,3}a_{2,3}),(a_{2,3}a_{1,4},a_{1,3}a_{1,4}) \\end{array}\\right\\}.$$\n\nThen \n\\begin{enumerate}\n\\item[(i)] $\\sup_{\\sf d}(x) = M(\\rho_{4}(x))$,\n\\item[(ii)] one can compute the dual normal form of $x$ from the matrix $\\rho_4(x)$. \n\\end{enumerate}\nIn particular, if the dual left normal form of a 4-braid $x$ \ndoes not contain a factor $(a_{1,2}a_{3,4})$ or $(a_{2,3}a_{1,4})$ then $\\rho_{4}(x) \\neq 1$.\n\\end{Corollary4stds}\n\nFinally, we give Garside-theoretical constraints for braids of arbitrary braid index to belong to the kernel of the Burau representation. \nLet $e:B_n \\rightarrow \\mathbb Z$ be the abelianization map. 
\n\\newtheorem*{Corollarynstds}{Corollary \\ref{cor:nstrands}}\n\\begin{Corollarynstds}\nLet $x\\in B_{n}$ be a non-trivial braid and $N_{\\sf d}(x)=\\delta^{p}d_{1}\\cdots d_{r}$. If there exists $r'\\leqslant r$ such that\n\\begin{enumerate}\n\\item[(i)] the subword $x_{r'}=\\delta^{p}d_{1}\\cdots d_{r'}$ is simply-nested,\n\\item[(ii)] $r' > e(d_{r'+1}\\cdots d_{r})$,\n\\end{enumerate}\nthen $\\rho_n(x)\\neq 1$.\\end{Corollarynstds}\n\nThus, we conclude that if a braid $x$ is sufficiently close to simply-nested braids, then its Burau matrix is never trivial.\n\nNow we explain the organization of the paper. \nIn Section \\ref{section:garside} we recall the Garside-theoretical notions and notations to be used later. \nSection \\ref{section:classical} proves Theorem \\ref{theorem:main_classical}.\nSections \\ref{section:diagrams}-\\ref{section:dual} are devoted to the proof of Theorem \\ref{theorem:main_dual}. This \ncan be sketched as follows. First we recall from \\cite{iw} the wall-crossing labeling of the curve diagram of a braid and how it is related to the dual Garside normal form (Section \\ref{section:diagrams}). Section \\ref{section:burau} reviews a homological interpretation of the reduced Burau representation; in this context we show how the Burau matrix is related to the wall-crossing labeling.\nThe wall-crossing labeling therefore serves as a bridge between the Burau representation and the dual Garside structure. \nFinally, Section \\ref{section:dual} introduces the notion of simply-nestedness and proves Theorem \\ref{theorem:main_dual}\nand its above-mentioned corollaries.\n\n\n\n\n\\section{Reminders on the Garside structures of braid groups}\\label{section:garside}\n\nLet $\\mathbb D^2$ be the closed disk in $\\mathbb C$ with diameter the real segment $[0,n+1]$ and $\\mathbb D_n$ be the $n$-times punctured disk: \n$\\mathbb D_{n}=\\mathbb D^2-\\{1,\\ldots,n \\}$. 
\nWe denote the $i$-th puncture point $i \\in \\C$ by $p_{i}$ and put $p_0=0\\in \\mathbb C$.\nAs is well-known, the braid group $B_{n}$ is identified with the mapping class group of $\\mathbb D_{n}$ (with boundary fixed pointwise).\nWe identify the standard Artin generator $\\sigma_{i}$ ($i=1,\\ldots,n-1$) with the \\emph{left-handed} (that is, \\emph{clockwise}) half Dehn twist along the real segment $[i,i+1]$. Throughout the paper we will consider braids acting on the \\emph{right}.\n\n\n\nFor $1\\leqslant i\\neq j\\leqslant n$, we denote by $a_{i,j}$ (or $a_{j,i}$ indifferently) the isotopy class of the left-handed half Dehn twist along an arc connecting the punctures $p_i$ and $p_j$ through the lower part of the disk $\\{z \\in \\mathbb D^{2}\\: | \\: \\textrm{Im}\\, z < 0\\}$. Using the Artin generators, $a_{i,j}$ ($i<j$) can be written as $a_{i,j}=(\\sigma_{j-1}\\sigma_{j-2}\\cdots \\sigma_{i+1})\\,\\sigma_{i}\\,(\\sigma_{j-1}\\sigma_{j-2}\\cdots \\sigma_{i+1})^{-1}$.\n\n\n\\section{The classical Garside structure and the Burau representation}\\label{section:classical}\n\nRecall that for a simple element $s$, we denote by $S(s)$ and $F(s)$ its \\emph{starting} and \\emph{finishing} sets, $S(s)=\\{i \\ | \\ \\sigma_{i}\\preccurlyeq s\\}$ and $F(s)=\\{i \\ | \\ s \\succcurlyeq \\sigma_{i}\\}$, and that a product $ss'$ of simple elements is left-weighted if and only if $S(s')\\subset F(s)$. For a 4-braid $x$ and $i=1,2,3$, we denote by $M_{i}(x)$ the maximal degree of the variable $q$ in the entries of the $i$th row of $\\rho_{4}(x)$, so that $M(\\rho_{4}(x))=\\max(M_{1}(x),M_{2}(x),M_{3}(x))$.\n\n\\begin{lemma}\\label{lem:classicalLemme}\nLet $x=s_{1}\\cdots s_{r}$ be a positive 4-braid in classical left normal form, with $r\\geqslant 2$, and suppose that no factor $s_{k}$ equals $\\sigma_{2}\\sigma_{1}\\sigma_{3}$. Write $M_{i}=M_{i}(x)$. Then:\n\\begin{itemize}\n\\item if $S(s_1)=\\{1\\}$ then $M_1>M_2$ and $M_1>M_3+1$, \n\\item if $S(s_1)=\\{2\\}$ then $M_2\\geqslant M_1$ and $M_2>M_3$, \n\\item if $S(s_1)=\\{3\\}$ then $M_3\\geqslant M_1$ and $M_3\\geqslant M_2$, \n\\item if $S(s_1)=\\{1,2\\}$ then either $M_1>M_2$ and $M_1>M_3+1$, or $M_2\\geqslant M_1$ and $M_2>M_3$,\n\\item if $S(s_1)=\\{2,3\\}$ then either $M_2\\geqslant M_1$ and $M_2>M_3$, or $M_3\\geqslant M_1$ and $M_3\\geqslant M_2$, \n\\item if $S(s_1)=\\{1,3\\}$ then either $M_1>M_2$ and $M_1>M_3$,\nor $M_3\\geqslant M_1$ and $M_3\\geqslant M_2$.\n\\end{itemize}\nMoreover, the following inequality holds: $\\sup_{\\sf c}(x)\\leqslant M(\\rho_4(x))\\leqslant 3\\sup_{\\sf c}(x)$.\n\\end{lemma}\n\n\n\n\\begin{proof}[Proof of Lemma \\ref{lem:classicalLemme}]\nThe proof is by induction on $r$.\nA direct calculation shows that all conclusions are correct for the case $r=2$. Here we remark that in the case $r=1$ and $s_{1}=\\sigma_{1}$ ($S(s_1)=\\{1\\}$), the conclusion does not hold since $M_1(\\sigma_1)=M_3(\\sigma_1)+1$.\n\nSuppose now $r>2$. Write $x=s_1x'$; by induction $x'$ satisfies the conclusions of the lemma. 
\nWe now distinguish 6 cases, according to the possible values of $S(s_1)$. \nIn each case, there are several possibilities for\n$s_1$. Each of them leads to conditions on the \nstarting set of $s_2$, the first factor of $x'$, because of the left-weightedness condition on the pair $(s_1,s_2)$.\nBy the induction hypothesis this gives relations between the integers $M'_i:=M_i(x')$. \nIn each case, using the explicit computation of $\\rho_4(s_1)$, we express the integers $M_i=M_i(s_1x')$ in terms of the $M'_i$ and show that they satisfy the expected relations. In each case, the computations to be performed show that \n$M(\\rho_4(x'))+1\\leqslant M(\\rho_4(s_1x'))\\leqslant M(\\rho_4(x'))+3$; this shows the last claim in the lemma.\n\nWe present the cases $S(s_1)=\\{2\\}$ and $S(s_1)=\\{1,3\\}$; this has the advantage of showing where the argument fails when a factor $\\sigma_2\\sigma_1\\sigma_3$ appears. The other cases are proven similarly.\n\n{\\bf Case $S(s_1)=\\{2\\}$.} The simple element $s_1$ is one of the following: $\\sigma_2$, $\\sigma_2\\sigma_1$, $\\sigma_2\\sigma_3$,\n$\\sigma_2\\sigma_1\\sigma_3\\sigma_2$ or $\\sigma_2\\sigma_1\\sigma_3$. We treat two examples; the three others are dealt with similarly. \n\n\n{\\underline{Suppose $s_1=\\sigma_2$.}} Then $F(s_1)=\\{2\\}$ and by left-weightedness $S(s_2)=\\{2\\}$. By induction, \nwe have $M'_2\\geqslant M'_1$ and $M'_2>M'_3$. Multiplying $\\rho_4(x')$ on the left by $\\rho_4(\\sigma_2)=\\begin{pmatrix}\n1 & q & 0\\\\\n0 & -q & 0\\\\\n0 & 1 & 1\\\\\n\\end{pmatrix}$, the new degrees $M_i$ in the product satisfy $M_1=M'_2+1$, $M_2=M'_2+1$ and $M_3\\leqslant M'_2$ (possibly the terms of highest degree in the second and third rows of $\\rho_4(x')$ cancel with each other). \nTherefore we have $M_2=M_1$ and $M_2>M_3$, thus satisfying the expected conditions when $S(s_1)=\\{2\\}$. \n\n{\\underline{Suppose $s_1=\\sigma_2\\sigma_1\\sigma_3$}}. 
\nThen $F(s_1)=\\{1,3\\}$ and by left-weightedness, $S(s_2)=\\{1\\},\\{3\\}$ or $\\{1,3\\}$.\nBy induction $M'_3\\geqslant M'_1,M'_2$ or $M'_1>M'_2,M'_3$ (possibly with $M'_1>M'_3+1$). \nComputing $\\rho_4(\\sigma_2\\sigma_1\\sigma_3)=\\begin{pmatrix} \n0 & q & q^2\\\\\n-q & -q & -q^2\\\\\n1 & 1 & 0\\\\\n\\end{pmatrix}$ we get in the first case $M_1=M'_3+2$, $M_2=M'_3+2$ and $M_3\\leqslant \\max(M'_1,M'_2)\\leqslant M'_3$; whence $M_2\\geqslant M_1$ and $M_2>M_3$. \nIn the second case, unless the strongest inequality $M'_1>M'_3+1$ holds, there is no reason why \na cancellation could not yield $M_3\\geqslant M_2$. Therefore the desired conclusion ($M_2>M_3$) possibly does not hold and we see that the argument fails when $\\sigma_2\\sigma_1\\sigma_3$ is a factor of $x$. \n\n{\\bf Case $S(s_1)=\\{1,3\\}$.} Then $s_1$ is $\\sigma_1\\sigma_3$, $\\sigma_1\\sigma_3\\sigma_2$, $\\sigma_1\\sigma_3\\sigma_2\\sigma_1$, $\\sigma_1\\sigma_3\\sigma_2\\sigma_3$ or $\\sigma_1\\sigma_3\\sigma_2\\sigma_1\\sigma_3$. \n\n{\\underline{Suppose $s_1=\\sigma_1\\sigma_3$.}} \nWe compute $\\rho_4(\\sigma_1\\sigma_3)=\\begin{pmatrix}\n-q & 0 &0 \\\\\n1 & 1 & q \\\\\n0 & 0 & -q\\\\\n\\end{pmatrix}.$\nBy left-weightedness, we have $S(s_2)\\subset \\{1,3\\}$ and therefore \nby induction the $M'_i$ satisfy \n$M'_1>M'_2,M'_3$ or $M'_3\\geqslant M'_1,M'_2$. \nIn the first case $x=\\sigma_1\\sigma_3x'$ satisfies \n$M_1=M'_1+1$, $M_2\\leqslant M'_1$ and $M_3=M'_3+1$: \nwe have, as expected, $M_1>M_2,M_3$. In the second case, we have $M_1=M'_1+1,M_2=M'_3+1,M_3=M'_3+1$ whence $M_3\\geqslant M_1,M_2$. \n\n{\\underline{Suppose $s_1=\\sigma_1\\sigma_3\\sigma_2$.}}\nThen we have to check the product of the matrix $\\rho_4(\\sigma_1\\sigma_3\\sigma_2)=\\begin{pmatrix}\n-q & -q^2 & 0 \\\\\n1 & q & q \\\\\n0 & -q & -q\\\\\n\\end{pmatrix}$ by $\\rho_4(x')$, where the $M'_i$ satisfy by induction $M'_2\\geqslant M'_1$ and $M'_2>M'_3$. \nThis gives $M_1=M'_2+2$, $M_2=M'_2+1$ and $M_3=M'_2+1$ whence $M_1>M_2,M_3$. 
\n\n{\\underline{Suppose $s_1=\\sigma_1\\sigma_3\\sigma_2\\sigma_1$.}}\nCompute $\\rho_4(\\sigma_1\\sigma_3\\sigma_2\\sigma_1)=\\begin{pmatrix}\n0 & -q^2 & 0 \\\\\n0 & q & q\\\\\n-q & -q & -q \\\\\n\\end{pmatrix}$. \nOn the other hand we have by induction one of the following sets of conditions on $x'$: \n$M'_1>M'_2$ and $M'_1>M'_3+1$; or $M'_2\\geqslant M'_1$ and $M'_2>M'_3$.\nIn the first case we obtain $M_1=M'_2+2,M_2\\leqslant \\max(M'_2+1,M'_3+1)$ and $M_3=M'_1+1$ whence \n$M_3\\geqslant M_1$ and $M_3>M_2$. In the second case we get $M_1=M'_2+2,M_2=M'_2+1$ and $M_3\\leqslant M'_2+1$ whence \n$M_1>M_2,M_3$. \n\n{\\underline{Suppose $s_1=\\sigma_1\\sigma_3\\sigma_2\\sigma_3$.}}\nCompute $\\rho_4(\\sigma_1\\sigma_3\\sigma_2\\sigma_3)=\\begin{pmatrix}\n-q & -q^2 & -q^3\\\\\n1 & q & 0 \\\\\n0 & -q & 0\\\\\n\\end{pmatrix}.$\nBy the induction hypothesis, as $S(s_2)\\subset F(s_1)=\\{2,3\\}$, we have $M'_2\\geqslant M'_1$ and $M'_2>M'_3$, or $M'_3\\geqslant M'_1,M'_2$. \nIn the first case: \n\\begin{itemize}\n\\item if $M'_2>M'_3+1$ then $M_1=M'_2+2$, $M_2=M'_2+1=M_3$ whence $M_1>M_2,M_3$,\n\\item if $M'_2=M'_3+1$ then $M_1\\leqslant M'_2+2$ and $M_2=M'_2+1=M_3$ whence we get $M_1>M_2,M_3$ if $M_1=M'_2+2$ and $M_3\\geqslant M_1,M_2$ if $M_1<M'_2+2$.\n\\end{itemize}\nIn the second case ($M'_3\\geqslant M'_1,M'_2$) we get $M_1=M'_3+3$, $M_2\\leqslant \\max(M'_1,M'_2+1)$ and $M_3=M'_2+1$, whence $M_1>M_2,M_3$. \n\n{\\underline{Suppose $s_1=\\sigma_1\\sigma_3\\sigma_2\\sigma_1\\sigma_3$.}}\nThe reduced Burau matrix of $s_1$ is \n$\\begin{pmatrix}\n0 & -q^2 & -q^3\\\\\n0 & q & 0 \\\\\n-q & -q & 0\\\\\n\\end{pmatrix}$. \nOn the other hand $F(s_1)=\\{1,3\\}$ whence by induction $x'$ satisfies: \n$M'_1>M'_2,M'_3$ or $M'_3\\geqslant M'_1,M'_2$. \nIn the first case we get $M_1\\leqslant \\max(M'_2+2,M'_3+3)$, $M_2=M'_2+1$, $M_3=M'_1+1$. This implies $M_3\\geqslant M_1,M_2$ provided \n$M'_1>M'_3+1$ holds. If on the contrary $M'_1=M'_3+1$ we can say more about $M_1$ (actually there will be no cancellation there) because the inequality $M'_1>M'_2$ then implies $M'_3+1>M'_2$ whence $M_1=M'_3+3$. This finally shows $M_1>M_2,M_3$. 
\nIn the second case we obtain $M_1=M'_3+3$, $M_2=M'_2+1$ and $M_3\\leqslant \\max(M'_1+1,M'_2+1)$ whence \n$M_1>M_2,M_3$. \n\n\n\\end{proof}\n\n\n\\begin{example} \nWe show that the conclusion for $S(s_1)=\\{2\\}$ in Lemma \\ref{lem:classicalLemme} does not necessarily hold if \n$s_1=\\sigma_2\\sigma_1\\sigma_3$. \nIndeed, let\n$x=\\sigma_2\\sigma_1\\sigma_3\\cdot\\sigma_1\\sigma_3\\sigma_2\\sigma_1\\cdot\\sigma_1\\sigma_2\\sigma_1\\sigma_3\\cdot\\sigma_1\\sigma_3\\cdot\\sigma_1\\sigma_2\\cdot \\sigma_2$. This braid has infimum 0 and is in normal form as written; the degrees of the entries of its Burau matrix are indicated in the following matrix:\n$\\begin{pmatrix} 5 & 8 & 7\\\\\n6 & 7 & 7\\\\\n5 & 7 & 2\\\\\n\\end{pmatrix}$.\n\\end{example}\nLemma \\ref{lem:classicalLemme} leads to the following non-vanishing criterion for the reduced Burau representation of 4-braids. \n\n\\begin{theorem}\\label{theorem:main_classical}\nIf the classical left normal form of a 4-braid $x$\ndoes not contain a factor $(\\sigma_{2}\\sigma_{1}\\sigma_{3})$\nthen $\\rho_{4}(x) \\neq 1$. \n\\end{theorem} \n\n\\begin{proof}\nLet $N_{\\sf c}(x)=\\Delta^{p}s_{1}\\cdots s_{r}$.\nIt is easy to check that if $r\\leqslant 2$, then $\\rho_4(x) \\neq 1$ so we may assume $r > 2$.\n\nFirst we observe that \n$$ \\rho_4(\\Delta)=\n\\begin{pmatrix}\n0 & 0 & -q^{3}\\\\\n0 & -q^{2} & 0\\\\\n-q & 0 & 0\\\\\n\\end{pmatrix}$$\nhence $M_{1}(\\Delta x) = M_{3}(x)+3$, $M_{2}(\\Delta x) = M_{2}(x)+2$ and $M_{3}(\\Delta x) = M_{1}(x)+1$.\n\nAssume that $S(s_{1})\\neq \\{1,3\\}$.\nIf $p$ is even, then by conjugating by $\\Delta$ if necessary, we may assume that $S(s_{1})=\\{1\\}, \\{2\\}$, or $\\{1,2\\}$. By Lemma \\ref{lem:classicalLemme}, $\\rho_4(s_1\\cdots s_r)$ is not a homothety hence $\\rho_4(\\Delta^ps_1\\cdots s_r) \\neq 1$. 
If $p$ is odd, we may assume similarly that $S(s_{1})=\\{2\\}, \\{3\\}$, or $\\{2,3\\}$ hence by Lemma \\ref{lem:classicalLemme}, \n$M_2(s_1\\cdots s_r)\\geqslant M_1(s_1\\cdots s_r)$ or $M_3(s_1\\cdots s_r)\\geqslant M_1(s_1\\cdots s_r)$. \nOn the other hand, $\\rho_4(x)=1$ implies \n$$M_{3}(s_{1}\\cdots s_{r})+2 = M_{2}(s_{1}\\cdots s_{r})+1 = M_{1}(s_{1}\\cdots s_{r}),$$\nwhich is a contradiction.\n\nNow we consider the case $S(s_{1})=\\{1,3\\}$. Assume for a contradiction that $\\rho_4(x)=1$. This implies in particular $M_{i}(yxy')=M_{i}(yy')$ for any 4-braids $y$ and $y'$. We deduce a contradiction by finding appropriate braids $y$ and $y'$. \n\n\\textbf{Case 1:} $s_{1} = \\sigma_{1}\\sigma_{3}$.\nIf $p$ is even, put $y=\\Delta\\sigma_{2}\\sigma_{1}\\sigma_{3}\\sigma_{2}$: $N_{\\sf c}((\\Delta\\sigma_{2}\\sigma_{1}\\sigma_{3}\\sigma_{2})x) = \\Delta^{p+2} s_{2}\\cdots s_{r}$. By direct calculation \nand under our hypothesis that $\\rho_4(x)=Id$,\n$$\\rho_4(\\Delta^{p+2} s_{2}\\cdots s_{r})=\\rho_4(\\Delta\\sigma_{2}\\sigma_{1}\\sigma_{3}\\sigma_{2})=\n\\begin{pmatrix}\n-q^3 & 0 & 0\\\\\nq^3 & q^{4} & q^{4}\\\\\n0 & 0 & -q^3\\\\\n\\end{pmatrix}.$$\nIt follows that $M_2(s_2\\ldots s_r)=M_1(s_2\\ldots s_r)+1=M_3(s_2\\ldots s_r)+1$; contradicting Lemma \\ref{lem:classicalLemme}\napplied with $S(s_2)\\subset \\{1,3\\}$ (which implies in particular $M_2<M_1$ or $M_2\\leqslant M_3$). The sub-case where $p$ is odd, and the remaining cases $s_{1}=\\sigma_{1}\\sigma_{3}\\sigma_{2}$, $\\sigma_{1}\\sigma_{3}\\sigma_{2}\\sigma_{1}$, $\\sigma_{1}\\sigma_{3}\\sigma_{2}\\sigma_{3}$ and $\\sigma_{1}\\sigma_{3}\\sigma_{2}\\sigma_{1}\\sigma_{3}$, are dealt with similarly, by exhibiting appropriate braids $y$ and $y'$.\n\\end{proof}\n\n\n\\section{Curve diagrams and the wall-crossing labeling}\\label{section:diagrams}\n\n\\subsection{Curve diagrams}\nLet $E$ denote the real segment $[1,n]$ and $E_{0}$ the \\emph{initial segment} $[0,1]$; we put $\\widebar{E}=E_{0}\\cup E$. For $i=1,\\ldots,n$, let $W_{i}$ be the vertical half-line above the puncture $p_{i}$: $W_{i}=\\{\\: z\\in \\mathbb D^{2} \\: | \\: \\textrm{Re}\\, z = i, \\ \\textrm{Im}\\, z >0 \\: \\}$. The lines $W_i$ are called the {\\em walls}, and their union $\\bigcup_{i} W_{i}$ is denoted $W$. Let $U_i$ be a disk-neighborhood of the puncture $p_i$ and set $U = \\bigcup_{i}U_{i}$. 
See Figure \\ref{fig:curvediagram} (b), (c).\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[scale=1.2]{curvediagram_0_2.eps}}\n \\caption{Curve diagram and walls}\n \\label{fig:curvediagram}\n\\end{figure}\n\nThe (\\emph{total}) \\emph{curve diagram} of a braid $x$ is the image of $E$ (respectively $\\widebar E$) under a diffeomorphism $\\phi$ representing $x$ which satisfies:\n\\begin{enumerate}\n\\item $(\\widebar E)\\phi$ coincides with the real line on $U$,\n\\item $(\\widebar{E})\\phi$ is transverse to $W$ and the number of intersections of $(\\widebar{E})\\phi$ with $W$ is as small as possible (which is equivalent to saying that $(\\widebar E)\\phi$ and $W$ do not bound together any bigon \\cite{fgrrw}).\n\\end{enumerate}\n\n\nThe (total) curve diagram is uniquely defined up to isotopy of $\\mathbb D_n$ that fixes $\\partial \\mathbb D_n$. We denote by $D_x$ ($\\widebar{D_x}$ respectively) the (total) curve diagram of a braid $x$.\nFigure \\ref{fig:curvediagram} (c) shows the (total) curve diagram of the braid $\\sigma_1\\in B_4$; according \nto our previous convention, the dashed line represents the image of the initial segment $E_0$. \n\nAn \\emph{arc segment} (or simply an arc) of the (total) curve diagram $D_x$ (or $\\widebar {D_x}$) is a connected component of $D_x-(W\\cup U)$ (or $\\widebar{D_x}-(W\\cup U)$). \nNotice that an arc segment of $\\widebar{D_x}$ is in one of the three following cases:\n\\begin{itemize}\n\\item it connects two walls $W_i$ and $W_j$,\n\\item it connects a wall $W_i$ and a puncture $p_j$ (more precisely the neighborhood $U_j$), \n\\item it connects two punctures $p_i$ and $p_j$ (more precisely the neighborhoods $U_i$ and $U_j$).\n\\end{itemize}\nIn all cases, $i\\neq j$ by construction of the curve diagram. We denote such an arc segment, in any of these cases, by $\\wideparen{(ij)}$. \nUnless explicitly specified, we will not care about the orientation of an arc segment; this is reflected in our notation. 
\n\n\\subsection{Wall-crossing labeling and dual normal form}\n\nWe now describe the wall-crossing labeling. To that purpose, we need to introduce a modified version of the curve diagrams. \n\nLet $x\\in B_n$. Around each puncture $p_{i}$ distinct from the image of $p_n$ under $x$, we modify the total curve diagram $\\widebar {D_{x}}$ inside the neighborhood $U_i$ as shown in Figure \\ref{fig:modcurve} (a). \nWe denote the resulting (total) curve diagram by $MD_{x}$ $(\\widebar{MD_{x}})$, and call it the (\\emph{total}) \\emph{modified curve diagram} of $x$.\nFigure \\ref{fig:modcurve} (b) shows the (total) modified curve diagram of $\\sigma_{1}\\in B_4$.\n\n\\begin{figure}[htbp]\n \\begin{center}\n\\includegraphics*[scale=1]{modcurve_0_2.eps}\n \\caption{Modified curve diagrams}\n \\label{fig:modcurve}\n \\end{center}\n\\end{figure}\n\nTake a smooth parametrization of $\\widebar{MD_x}$, viewed as the image of a function $\\gamma\\co [0,1] \\rightarrow \\mathbb D^{2}$. To each connected component $\\alpha$ of $\\widebar{MD_{x}}- (W\\cup U)$, we assign the algebraic intersection number of $W$ and the arc $\\gamma([0,v])$, where $v \\in [0,1]$ is taken so that $\\gamma(v) \\in \\alpha$. \nNotice that a connected component of $\\widebar{MD_{x}}-(W\\cup U)$ naturally corresponds to an arc segment of $\\widebar {D_x}$, since \n$\\widebar{MD_{x}}$ and $\\widebar{D_{x}}$ are identical except on $U$. This allows us to attribute a label to each arc segment of $\\widebar {D_x}$; this integer-valued labeling is called the \\emph{wall-crossing labeling} of $x$. We define\n$\\LWcr(x)$ and $\\SWcr(x)$ as the largest and smallest labels occurring in the wall-crossing labeling for arc segments \\emph{in the curve diagram} $D_x$, respectively. \n\nNotice that to define $\\LWcr$ and $\\SWcr$, we used the largest and smallest labels only of the curve diagram $D_{x}$, not the total curve diagram $\\widebar{D_{x}}$. 
However, in order to determine the wall-crossing labeling we need to consider the total curve diagram. \n\nThe following relates the wall-crossing labeling with the dual length of a braid:\n\\begin{theorem}\\cite[Theorem 3.3]{iw}\n\\label{theorem:wall}\nFor a braid $x \\in B_{n}$, we have the following equalities: \n\\begin{enumerate}\n\\item $\\sup_{\\sf d}(x) = \\LWcr(x)$.\n\\item $\\inf_{\\sf d}(x) = \\SWcr(x)$.\n\\end{enumerate}\n\\end{theorem}\n\nHere we show a result stronger than Theorem \\ref{theorem:wall}, which is suggested by and is implicit in the proof of \\cite[Theorem 3.3]{iw}: one can read off not only the supremum and the infimum, but also the dual Garside normal form, from the curve diagram.\nRecall from Section \\ref{section:garsidedual} the lattice ordering $\\preccurlyeq_{\\sf d}$ on $B_n$. \n\n\\begin{theorem}\n\\label{theorem:finalfactor}\nLet $x \\in B_{n} $ be a braid and put $\\ell=\\LWcr(x)-\\SWcr(x)$.\nFor $k=\\ell,\\ldots,1$, we define $d_{k}$ inductively as follows: \n\\begin{enumerate}\n\\item $d_{\\ell}$ is the least common multiple (with respect to $\\preccurlyeq_{\\sf d}$) of all letters $a_{i,j}$ such that the curve diagram $D_{x}$ contains an arc segment $\\wideparen{(i j)}$ with wall-crossing labeling $\\LWcr(x)$.\n\\item $d_{k}$ is the least common multiple (with respect to $\\preccurlyeq_{\\sf d}$) of all letters $a_{i,j}$ such that the curve diagram $D_{x d_{\\ell}^{-1}\\cdots d_{k+1}^{-1}}$ contains an arc segment $\\wideparen{(i j)}$ with wall-crossing labeling $(k+\\SWcr(x))$.\n\\end{enumerate}\nThen the dual normal form of $x$ is given by\n$$ N_{\\sf d}(x)=\\delta^{\\SWcr(x)}d_1\\cdots d_{\\ell}.$$\n\\end{theorem}\n\n\nBefore proving Theorem \\ref{theorem:finalfactor}, we review from \\cite{iw} the description of how the action of a dual simple element affects the curve diagram of a braid and its wall-crossing labeling. 
This was the key to the proof of Theorem \\ref{theorem:wall}.\n\n\nDealing with the dual Garside structure, it will be convenient to work with the \nmodel of the punctured disk described in Section \\ref{section:garsidedual}; in that context the wall $W_i$ is the shortest straight segment connecting the puncture $p_i$ to the boundary, oriented outwards. Notice that the isotopy involved in the change of model for the punctured disk does not affect the wall-crossing labeling since the latter \nis defined in terms of algebraic intersections of arcs and walls. \n\n\nLet $x\\in B_n$ and let $d$ be a dual simple element. Write $d=P_1\\cdots P_r$ for the decomposition of $d$ into a product of disjoint polygons. For $i=1,\\ldots,r$, let $N_i$ be a regular neighborhood of the polygon $P_i$ in $\\mathbb D_n$. Let $A_i$ be an annulus which is a regular neighborhood of the boundary of $N_i$. Suppose moreover that $A_i$ is chosen so that neither of its two boundary components forms a bigon together with the walls $W$ or the diagram $D_{x}$, and so that as many intersection points of $D_{x}$ and $W$ as possible lie in $A_i$.\n\nNow $D_{xd}$ and its wall-crossing labeling are obtained as follows. The respective actions of each of the polygons $P_i$ are independent; each of them acts non-trivially only on the inner complementary component of the corresponding annulus and on the annulus itself (where the diagram just describes a spiral). \nFor each $i=1,\\ldots,r$, $N_i$ is turned by one notch in the clockwise direction and all labels are increased by one; on the annulus $A_i$, $D_{xd}$ and the corresponding labels are interpolated linearly; see Figure \\ref{fig:Paction}. \nThe action of the inverse of a dual simple element can be described in a very similar way, the twisting on $N_i$ being in the opposite direction, and all labels being decreased by one. 
\n\n\\begin{figure}[hbtp] \n \\begin{center}\n\\includegraphics*[scale=0.5, width=110mm]{Paction_0_1.eps}\n\\caption{How to draw the curve diagram of $xd$ from $D_{x}$}\\label{fig:Paction}\n \\end{center}\n\\end{figure}\n\n\\begin{proof}[Proof of Theorem \\ref{theorem:finalfactor}]\nWe prove the theorem by induction on $\\ell=\\LWcr(x)-\\SWcr(x)$. \nWhen $\\ell \\leqslant 1$, the result is explicitly contained in the proof of Lemma 3.5 in \\cite{iw}. \nSuppose that $\\ell\\geqslant 2$. By induction, it is sufficient to show that $(d_{\\ell-1},d_{\\ell})$ is left-weighted. \n\nWe check the left-weightedness using Proposition \\ref{prop:left-weighted}.\nWrite the dual simple elements $d_{\\ell}$ and $d_{\\ell-1}$ as products of disjoint polygons: $d_{\\ell-1}=P_1\\cdots P_{r_{\\ell-1}}$ and $d_{\\ell}=Q_{1}\\cdots Q_{r_{\\ell}} $, respectively.\nLet $i,j$ be two vertices of some polygon $Q \\in \\{Q_1,\\ldots,Q_{r_{\\ell}}\\}$. We must show that there exists a polygon $P \\in \\{ P_1,\\ldots, P_{r_{\\ell-1}}\\}$ having vertices $k,l$ such that $a_{k,l}\\vdash a_{i,j}$. \nBy definition of $d_{\\ell-1}$, it is sufficient to show that $D_{xd_{\\ell}^{-1}}$ admits an arc segment $\\wideparen{(kl)}$ with label $\\LWcr(x)-1$ and such that $a_{k,l}\\vdash a_{i,j}$.\n\n\nAssume first that the diagram $D_x$ admits an arc segment $\\wideparen{(ij)}$ with label $\\LWcr(x)$. Then according to the description above of the action of the inverse of a polygon, the diagram $D_{xd_{\\ell}^{-1}}$ admits an arc segment $\\wideparen{(kl)}$ with label $\\LWcr(x)-1$ such that $k\\in (j,i-1)$ and $l\\in(i,j-1)$, as desired. Moreover, we notice that if $g_{i,j}$ and $h_{i,j}$ are the rightmost vertices of $Q$ in $(j,i-1)$ and $(i,j-1)$ respectively, then \n$k\\in (g_{i,j},i-1)$ and $l\\in (h_{i,j},j-1)$. 
See Figure \\ref{Fig:LW} (a).\n\nAssume now that $D_x$ does not have an arc segment $\\wideparen{(ij)}$ with label $\\LWcr(x)$.\nSince both $i$ and $j$ are vertices of $Q$, by definition of $Q$ there must exist arc segments $\\wideparen{(ib)},\\wideparen{(jc)}$ of $D_{x}$ with label $\\LWcr(x)$, for some punctures $b, c \\not \\in \\{i,j\\}$, possibly $b=c$.\n \nSuppose that such a puncture $b$ can be chosen so that \n$a_{b,i}a_{i,j}$ is a dual simple braid. This means that $b\\in (i+1,j-1)$. \nBut we have just seen that the action of $Q^{-1}$ produces an arc segment $\\wideparen{(kl)}$ labeled by $\\LWcr(x)-1$ in the diagram $D_{xd_{\\ell}^{-1}}$, such that $k\\in (j,i-1)$ (because the rightmost vertex of $Q$ in the arc $(b+1,i-1)$ certainly lies in the subarc $(j,i-1)$) and $l\\in (i,b-1)\\subset (i,j-1)$. Similarly, if $c$ can be chosen so that $a_{c,j}a_{i,j}$ is a dual simple braid, we get a pair of punctures $k,l$ with the expected property. See Figure \\ref{Fig:LW} (b).\n\n\n\\begin{figure}[hbtp] \n \\begin{center}\n\\includegraphics*[scale=0.5, width=120mm]{alphaMax.eps}\n\\caption{Proof of Theorem \\ref{theorem:finalfactor}; all arc segments represented are labeled $\\LWcr(x)$, crosses indicate vertices of the polygon $Q$.}\\label{Fig:LW}\n \\end{center}\n\\end{figure}\n\nFinally, suppose that no arc segment $\\wideparen{(bi)}$ nor $\\wideparen{(jc)}$ of $D_x$ with labeling $\\LWcr(x)$ has the above property. \nThen $b\\in (j+1,i-1)$, $c\\in (i+1,j-1)$ and $b\\neq c$. Among all $b$ such that $D_x$ admits an arc $\\wideparen{(bi)}$ labeled $\\LWcr(x)$, let $b_0$ be the leftmost one. Similarly, among all $c$ such that $D_x$ admits an arc $\\wideparen{(jc)}$ labeled $\\LWcr(x)$, let $c_0$ be the leftmost one. \nThe punctures $i,j,b_0$ and $c_0$ are all distinct and are vertices of the polygon $Q$. 
By definition of $d_\\ell$, there must exist \nan arc segment $\\wideparen{(f_1f_2)}$ in $D_{x}$ with labeling $\\LWcr(x)$ such that $f_1\\in (j+1,b_0)$, $f_2\\in (i+1,c_0)$; the punctures $f_1,f_2$ are also vertices of $Q$. \nBut then $D_{xd_{\\ell}^{-1}}$ admits an arc $\\wideparen{(kl)}$ labeled $\\LWcr(x)-1$ with $k\\in(j,f_1-1)$ and $l\\in(i,f_2-1)$, thus with the required property. See Figure \\ref{Fig:LW} (c) (an example where $f_1=b_0$).\nThis completes the proof of Theorem \\ref{theorem:finalfactor}.\n\\end{proof}\n\n\n\n\\section{Burau representation}\\label{section:burau}\n\nIn this section we review a homological construction of the Burau representation; \nthis interpretation is used to relate the latter with the wall-crossing labeling.\n\n\\subsection{The Burau representation}\nFix the base point $\\ast=-(n+1) \\in \\partial \\mathbb D_{n}$ on the boundary of $\\mathbb D_n$. The fundamental group $\\pi_{1}(\\mathbb D_{n})$ is a free group of rank $n$, where the free generator $x_{i}$ is represented by a loop which goes once around the $i$th puncture $p_i$ clockwise.\nLet $\\epsilon: \\pi_{1}(\\mathbb D_{n}) \\rightarrow \\Z=\\langle q \\rangle$ be the homomorphism which sends all $x_{i}$ to the generator $q$. Geometrically, for a loop $\\gamma$, $\\epsilon([\\gamma])$ is the sum of the algebraic winding numbers of $\\gamma$ about the puncture points $\\{p_{i}\\}$ (in the clockwise direction).\n\nLet $\\pi: \\widetilde{\\mathbb D_{n}} \\rightarrow \\mathbb D_{n}$ be the infinite cyclic covering corresponding to $\\textrm{Ker}(\\epsilon)$, and fix a lift $\\widetilde{\\ast}$ of the base point.\nThe group of covering transformations of $\\widetilde{\\mathbb D_{n}}$ is identified with the cyclic group $\\langle q \\rangle$.\nThen $H_{1}(\\widetilde{\\mathbb D_{n}};\\Z)$ can be endowed with a structure of $\\Z[q,q^{-1}]$-module, where multiplication by $q$ corresponds to the deck transformation. 
Moreover it turns out that $H_{1}(\\widetilde{\\mathbb D_{n}};\\Z)$ is free of rank $(n-1)$ as a $\\Z[q,q^{-1}]$-module. Since $\\epsilon$ is $B_{n}$-invariant, we have a linear representation \n\\[ \\rho: B_{n} \\rightarrow \\GL (H_{1}(\\widetilde{\\mathbb D_{n}};\\Z) ). \\]\nThis is called the {\\em (reduced) Burau representation}. In the rest of this section, we keep the same notation $\\epsilon$, $\\widetilde{\\mathbb D_n}$ and $\\widetilde{\\ast}$ for the winding number evaluation morphism, the covering space of $\\mathbb D_n$ and the base point defined above.\n\n\\subsection{Forks}\\label{section:forks}\nLet $Y$ be the $Y$-shaped graph \nconsisting of three external vertices -- a distinguished one $r$ and two others $v_{1}$ and $v_{2}$ -- one internal vertex~$c$, and three edges joining \neach external vertex to the internal one (see Figure \\ref{fig:fork} (a)). \nWe orient the edges of $Y$ as shown in Figure \\ref{fig:fork} (a).\n\nA {\\em fork} is an embedded image of $Y$ into $\\mathbb D_n$ such that:\n\\begin{itemize}\n\\item All points of $Y- \\{r,v_1,v_2\\}$ are mapped to the interior of $\\mathbb D_n$.\n\\item The distinguished vertex $r$ is mapped to the base point $\\ast$.\n\\item The other two external vertices $v_{1}$ and $v_{2}$ are mapped to two different puncture points.\n\\end{itemize}\n\nGiven a fork $F$, the image of the edge $[r,c]$ is called the {\\em handle} of~$F$\nand the image of $[v_1,v_2]=[v_{1},c] \\cup [c,v_{2}]$, regarded as a single oriented arc, is called the {\\em tine} of $F$ and denoted by $T(F)$. 
The image of $c$ is called the \\emph{branch point} of~$F$.\nFigure \\ref{fig:fork} (b) shows a fork $F$ (with the handle depicted in grey and the tine in black).\n\n\n\n\\begin{figure}[htbp]\n \\begin{center}\n\\includegraphics*[scale=0.5, width=100mm]{fork_0_1.eps}\n \\caption{Fork and standard fork $F_i$}\n \\label{fig:fork}\n \\end{center}\n\\end{figure}\n\n\nFor a fork $F$, let $\\gamma \\co [0,1] \\rightarrow \\mathbb D_{n}$ be the handle of $F$, viewed as a path in $\\mathbb D_{n}$, and take a lift \n\\[ \\widetilde{\\gamma} \\co [0,1] \\rightarrow \\widetilde{\\mathbb D_{n}} \\]\nof $\\gamma$ so that $\\widetilde{\\gamma}(0)=\\widetilde{\\ast}$.\nLet $\\Sigma(F)$ be the connected component of $\\pi^{-1}(T(F))$ that contains the point $\\widetilde{\\gamma}(1)$. \nThe \\emph{homology class of $H_{1}(\\widetilde{\\mathbb D_n};\\Z)$ represented by} $F$ is then defined as the homology class represented by $\\Sigma(F)$. \nBy abuse of notation, we still denote this homology class by $F$.\nStrictly speaking, since $\\Sigma(F)$ is not compact we need to work with the homology of locally finite chains $H_{1}^{lf}(\\widetilde{\\mathbb D_n};\\Z)$ or $H_{1}(\\widetilde{\\mathbb D_n},\\widetilde{P};\\Z)$, where $\\widetilde{P}$ is the preimage of a small neighborhood of the punctures in $\\mathbb D_{n}$. Rigorous treatments are well-known and lead to the same conclusions (see \\cite{bi3}, for example), so we will not dwell on these subtleties.\n \n\nOf special importance is the following family of particularly simple forks: \nfor $i=1,\\ldots,n-1$, let $F_{i}$ be the fork whose tine is a straight arc connecting the $i$th and the $(i+1)$st punctures and whose handle is contained in the lower half of the disk $\\mathbb D_{n}$ (see Figure \\ref{fig:fork} (c)). \nThese are called {\\em standard forks}.\nThe standard forks $F_{1}, \\ldots, F_{n-1}$ form a basis of $H_{1}(\\widetilde{\\mathbb D_{n}};\\Z)$. 
\nThe group $\\GL(H_{1}(\\widetilde{\\mathbb D_{n}};\\Z))$ can be identified with $\\GL(n-1;\\Z[q,q^{-1}])$ using the basis of standard forks. \nThis yields the familiar matrix description of the reduced Burau representation:\n\n\n\\[\n\\rho_n(\\sigma_{1}) = \\left(\n\\begin{array}{cc} \n-q & 0 \\\\\n1 & 1 \\\\\n\\end{array}\n\\right) \\oplus I_{n-3}, \\;\\;\\; \n\\rho_n(\\sigma_{n-1})=I_{n-3}\\oplus\n\\left(\\begin{array}{cc}\n1 & q\\\\\n0 &-q\\\\\n\\end{array}\n\\right),\n\\] \n\\[\n\\rho_n(\\sigma_{i}) = I_{i-2} \\oplus \n\\left(\n\\begin{array}{ccc} \n1 & q & 0 \\\\\n0 & -q & 0 \\\\\n0 & 1 & 1 \\\\\n\\end{array}\n\\right) \n\\oplus I_{n-i-2}, \\;\\;\\; (i=2,\\ldots,n-2)\n\\]\n\n\n\n\n\\subsection{The noodle-fork pairing}\nA {\\em noodle} is an embedded oriented arc in $\\mathbb D_n$ which begins at the base point $\\ast$ and ends at some point of the boundary $\\partial\\mathbb D_n$. \nNoodles represent relative homology classes in $H_1(\\widetilde{\\mathbb D_n}, \\partial \\widetilde{\\mathbb D_n};\\Z)$. \n\nThe \\emph{noodle-fork pairing} (with our conventions, it would more accurately be called the fork-noodle pairing) is a homology intersection (algebraic intersection) pairing \n$$\\langle\\;,\\;\\rangle:H_1(\\widetilde{\\mathbb D_n} ;\\Z) \\times H_1(\\widetilde{\\mathbb D_n}, \\partial \\widetilde{\\mathbb D_n};\\Z) \\rightarrow \\Z[q,q^{-1}].$$\nGeometrically, it is computed in the following way (see \\cite{bi3}, Section 4).\n\n\nGiven a fork $F$ and a noodle $N$, isotope them so that $T(F)$ and $N$ intersect transversely in a minimal number of points.\nLet $z_1,\\ldots,z_r$ be the intersection points. Each intersection point $z_{i}$ then contributes a monomial $\\varepsilon_{i}q^{e_i}$ to $\\langle F,N\\rangle$, where $\\varepsilon_i$ is the sign of the intersection between $T(F)$ and $N$ at $z_{i}$ and $e_i$ is an integer.\nThe noodle-fork pairing is then given by\n\\[ \\langle F,N\\rangle = \\sum_{1 \\leq i \\leq r} \\varepsilon_{i}q^{e_i} \\in \\Z[q,q^{-1}]. 
\\]\n\nThe integer $e_i$ is computed as follows. Let $\\gamma_i$ be the loop which is the composition of three paths $A$, $B$ and $C$ in $\\mathbb D_{n}$:\n\\begin{itemize}\n\\item $A$ is a path from $\\ast$ to the branch point of~$F$ along the handle of~$F$.\n\\item $B$ is a path from the branch point of $F$ to $z_{i}$ along the tine $T(F)$.\n\\item $C$ is a path from $z_{i}$ to $\\ast$ along the noodle $N$.\n\\end{itemize}\nThen $e_{i} = \\epsilon([\\gamma_i])$: that is, $e_i$ is the sum of the winding numbers of the loop $\\gamma_i$ about the puncture points $p_1,\\ldots,p_n$.\n\nAs for forks, we define a distinguished family of noodles: for $i=1,\\ldots,n-1$, the \\emph{standard noodle} $N_i$ is the noodle which has empty intersection with the walls and ends at some boundary point between $W_{i}$ and $W_{i+1}$. \nGiven a braid $x$, the entries of its Burau matrix can be computed using the noodle-fork pairing in a fairly direct manner.\n\n\n\\begin{lemma}[Burau Matrix formula]\n \\label{lemma:Buraumatrix}\nLet $x\\in B_n$. \nThen for $1\\leqslant i,j\\leqslant n-1$, the entry $\\rho_n(x)_{ij}$ of its Burau matrix is given by $\\rho_n(x)_{ij}=\\langle (F_{i})x,N_{j} \\rangle$.\n\\end{lemma}\n\\begin{proof}\nBy definition, $(F_i)x = \\sum_{k=1}^{n-1} F_k \\rho_n(x)_{ik} \\in H_{1}(\\widetilde{\\mathbb D_n};\\Z)$, hence for $i,j \\in \\{1,\\ldots,n-1\\}$ we have\n\\[ \\langle (F_i)x , N_j \\rangle = \\sum_{k=1}^{n-1} \\langle F_{k},N_{j} \\rangle \\rho_n(x)_{ik}.\\]\nIt is directly checked that $\\langle F_{k},N_{j}\\rangle = \\delta_{kj}$ (the Kronecker delta), hence\n\\[ \\langle (F_{i})x,N_{j} \\rangle = \\rho_n(x)_{ij}. \\]\n\\end{proof}\n\n\\begin{example}\nAs an example of application of Lemma \\ref{lemma:Buraumatrix}, we can retrieve the Burau matrices associated to the Artin generators $\\sigma_i$. 
First, we notice that for $k=1,\\ldots,n-1$ and $i\\neq k-1,k,k+1$, $(F_i)\\sigma_k=F_i$, so that $\\langle (F_i)\\sigma_k,N_j\\rangle=\\delta_{i,j}$.\nFor the remaining values of $i$, Figure \\ref{fig:example} shows the images $(F_i)\\sigma_k$. \n\\begin{figure}[htbp]\n \\begin{center}\n\\includegraphics*[scale=0.5, width=100mm]{Example.eps}\n \\caption{On the left part, forks $F_{k-1}$, $F_k$ and $F_{k+1}$ and on the right part, their images under the action of the braid $\\sigma_k$; relevant noodles are depicted in dashed lines.}\n \\label{fig:example}\n \\end{center}\n\\end{figure}\n\nWith the help of Figure \\ref{fig:example} we can conclude:\n\n\\begin{tabular}{lr}\n{$\\langle (F_{k-1})\\sigma_k,N_j\\rangle=\\begin{cases} \n0 & {\\text{if}}\\ j\\neq k-1,k,\\\\\n1 & {\\text{if}}\\ j=k-1,\\\\\nq & {\\text{if}}\\ j=k.\\\\\n\\end{cases}$} & \n{$\\langle (F_{k+1})\\sigma_k,N_j\\rangle=\\begin{cases} \n0 & {\\text{if}}\\ j\\neq k,k+1,\\\\\n1 & {\\text{if}}\\ j=k,\\\\\n1 & {\\text{if}}\\ j=k+1.\\\\\n\\end{cases}$} \\\\\n\\end{tabular}\n\n$$\\langle (F_{k})\\sigma_k,N_j\\rangle=\n\\begin{cases} \n0 & {\\text{if}}\\ j\\neq k,\\\\\n-q & {\\text{if}}\\ j=k.\\\\\n\\end{cases}\n$$\nLemma \\ref{lemma:Buraumatrix} then allows us to retrieve the matrices given at the end of Section \\ref{section:forks}.\n\n\n\\end{example}\n\n\\subsection{Noodle-fork pairing and wall-crossing labeling}\nWe finally review a connection between the integers $e_i$ in the computation of the noodle-fork pairing and the wall-crossing labeling. \nThis will yield the expected relation between the Burau representation and the wall-crossing labeling. \n\nLet $x\\in B_n$. First we recall how to assign wall-crossing labelings to points of the image $(F_i)x$ of the standard fork $F_i$ under $x$. \nLet us consider the part of the curve diagram $D_{x}$ that is the image of $E_{i}$ (the line segment between the $i$-th and $(i+1)$-st punctures). We identify this part $(E_{i})x$ of the curve diagram with $(T(F_{i}))x$. 
Moreover, a part of the modified curve diagram can naturally be regarded as the handle of $(F_{i})x$, as shown in Figure~\\ref{fig:cdtofork}. This identification induces the wall-crossing labeling on each connected component of $(F_{i})x- (W\\cup U)$. For a point $z \\in (F_i)x- (W\\cup U)$ we denote by $\\Wcr_x(z)$ the corresponding label. \n\n\\begin{figure}[htbp]\n \\begin{center}\n\\includegraphics*[scale=0.5, width=80mm]{cdtofork_0_1.eps}\n\\caption{Viewing a curve diagram as a union of tines of forks, and viewing initial segments of modified curve diagrams as handles.}\n\\label{fig:cdtofork}\n\\end{center}\n\\end{figure}\n\nLet $N$ be a noodle; we may assume that no intersection point in $(T(F_i))x \\cap N$ belongs to $W\\cup U$. \n\n\\begin{lemma}\n\\label{lemma:exponent}\nFix an intersection point $z\\in (T(F_i))x \\cap N$.\nLet $c(z)$ be the algebraic intersection number of $W$ and the path $C$ in the definition of the pairing $\\langle (F_i)x,N\\rangle$\n(i.e., $C$ is a path from $z$ to $\\ast$ along $N$). Let $e(z)$ be the degree of $q$ in the $z$-contribution to $\\langle (F_i)x,N\\rangle$.\nThen\n\\[ e(z) = \\Wcr_x(z) + c(z).\\]\n\\end{lemma}\n\\begin{proof}\nLet $A$ and $B$ be the paths in the definition of the pairing $\\langle (F_i)x,N\\rangle$. Then $\\Wcr_x(z)$ is nothing but the algebraic intersection number of $W$ and the composite path $BA$. 
Hence the algebraic intersection number of $W$ and the loop $\\gamma = CBA$ is $\\Wcr_x(z) + c(z)$, which is, by definition, equal to $e(z)= \\epsilon(\\gamma)$.\n\\end{proof}\n\n\\begin{corollary}\n\\label{cor:ineqn}\nFor any braid $x\\in B_n$, the following inequality holds:\n$$M(\\rho_n(x))\\leqslant \\sup \\!{}_{\\sf d}(x).$$\n\\end{corollary}\n\\begin{proof}\nIndeed, by definition and by Lemma \\ref{lemma:Buraumatrix}, we have\n$$M(\\rho_n(x)) = \\max_{i,j}\\{ M(\\langle (F_{i})x,N_{j} \\rangle) \\}.$$\nFor a standard noodle $N_j$ and a point $z\\in (T(F_{i}))x \\cap N_j$ the integer $c(z)$ in Lemma \\ref{lemma:exponent} is always $0$ because standard noodles do not intersect the walls.\nTherefore we have \n$$\\max_{i,j}\\{ M (\\langle (F_{i})x,N_{j} \\rangle) \\}\\leqslant \\LWcr(x)$$\nand finally, as $\\LWcr(x) = \\sup_{\\sf d} (x)$ (Theorem \\ref{theorem:wall}), we are done.\n\\end{proof}\n\n\n\n\n\\section{Braids whose Burau Matrix detects the dual Garside normal forms}\\label{section:dual}\n\nIn view of Corollary \\ref{cor:ineqn}, it is natural to ask when the converse inequality holds. Theorem \\ref{theorem:main_dual}\nwill give a sufficient condition for the maximal degree appearing in the Burau matrix of a braid to be equal to its dual supremum. Actually we will prove more: under the same condition, it is possible to determine the dual normal form from the Burau matrix.\n\nTo state Theorem \\ref{theorem:main_dual} we first introduce the notion of simply-nestedness as a refinement of the left-weightedness condition (Proposition \\ref{prop:left-weighted}), which will give us better control over the action of a braid in dual normal form. \n\nLet $d,d'$ be two dual simple elements, expressed as products of disjoint polygons $d=P_1\\ldots P_r$ and $d'=Q_1\\ldots Q_{r'}$ respectively. 
\nWe say that the ordered pair $(d,d')$ is \\emph{simply-nested} if for any polygon $Q$ among $Q_{1},\\ldots, Q_{r'}$, there exists a {\\em unique} polygon $P$ among $P_1,\\ldots, P_{r}$ such that for any two vertices $i,j$ of $Q$, the polygon $P$ has two vertices $k,l$ such that \n$a_{k,l}\\vdash a_{i,j}$. \nA braid $x$ will be said to be \\emph{simply-nested} if each pair of consecutive factors in its dual normal form is simply-nested.\n\nLet $B_{n}^{\\sf sn}$ be the set of simply-nested $n$-braids.\nAlthough $B_{n}^{\\sf sn}$ does not form a group, $B_{n}^{\\sf sn}$ is a regular language over the alphabet $[1,\\delta]$.\nWe also remark that $B_{n}^{\\sf sn}$ is not symmetric: $x \\in B_{n}^{\\sf sn}$ does not imply $x^{-1} \\in B_{n}^{\\sf sn}$. A simple example is the 4-braid $x=(a_{3,4})(a_{2,4})$. Although $x$ is simply-nested, $N_{\\sf d}(x^{-1}) = \\delta^{-2}(a_{1,2}a_{3,4})(a_{1,2}a_{1,4})$, which is not simply-nested.\n\n\nWe can now state our second main result:\n\n\\begin{theorem}\n\\label{theorem:main_dual}\nLet $x \\in B_n$ be a simply-nested braid.\n\\begin{enumerate}\n\\item[(i)] $\\sup_{\\sf d}(x) = M(\\rho_n(x))$.\n\\item[(ii)] One can compute the dual normal form from the matrix $\\rho_{n}(x)$, so the restriction of the Burau representation to the set of simply-nested braids $B_{n}^{\\sf sn}$ is injective.\n\\end{enumerate} \n\\end{theorem}\n\nFor a braid $x$ and $i=1,\\ldots,n-1$ let $\\mathcal M_{x}(E_i)$ be the set of arc segments of $(E_i)x$ whose wall-crossing labeling attains the maximal value $\\LWcr(x)$ (possibly empty). We say that two arc segments in the curve diagram are \\emph{parallel} if both are described by $\\wideparen{(ij)}$ for some $i,j$. \nWe consider the following property \\textbf{(C)} (\\emph{Coherence property}) for a braid $x$:\n\\begin{definition}\nLet $x\\in B_n$ and $N_{\\sf d}(x) = \\delta^{p}d_1\\cdots d_r$. 
Express $d_{r}$ as a product of disjoint polygons: $d_{r}=Q_{1}\\cdots Q_{b}$.\nWe say that \\emph{$x$ has the property} \\textbf{(C)} if for each $i=1,\\ldots,n-1$, any two arc segments $\\alpha$ and $\\alpha'$ in $\\mathcal {M}_{x}(E_i)$ intersecting a common polygon $Q \\in \\{Q_{1},\\ldots,Q_{b}\\}$ are parallel and have the same orientation.\n\\end{definition}\n\n\n\\begin{lemma}\\label{lemma:CondC}\nIf $x$ has the property \\textbf{(C)}, then $\\sup_{\\sf d}(x)= M(\\rho_n(x))$ holds.\n\\end{lemma}\n\n\n\\begin{proof}\nLet $N_{\\sf d}(x) = \\delta^{p}d_1\\cdots d_r$. \nTake $i$ so that $\\mathcal{M}_{x}(E_i)$ is non-empty. Take the minimal $k$ such that there exists an arc segment $\\alpha = \\wideparen{(kp)} \\in \\mathcal{M}_{x}(E_i)$ for some $p>k$. We look at the entry $\\rho_n(x)_{ik}$ in the Burau matrix of $x$, which is equal to $\\langle (F_i)x, N_{k}\\rangle$ by Lemma \\ref{lemma:Buraumatrix}. In view of Corollary \\ref{cor:ineqn} and Theorem \\ref{theorem:wall}, the desired equality will be shown provided $M(\\langle (F_i)x,N_{k}\\rangle)=\\LWcr(x)$. \n\nLet $\\alpha'$ be another arc segment in $\\mathcal{M}_{x}(E_i)$ which intersects the noodle $N_k$. By minimality of $k$, $\\alpha'=\\wideparen{(ku)}$ for some $u\\in (k+1,n)$. By Theorem \\ref{theorem:finalfactor},\nsome polygon $Q$ in the decomposition of $d_r$ \nhas vertices $k,p,u$; both arcs $\\alpha$ and $\\alpha'$ intersect $Q$. Hence by Property \\textbf{(C)}, $\\alpha$ and $\\alpha'$ are parallel with the same orientation (notice that, in particular, $u=p$ holds). \nThis shows that all arcs in $\\mathcal M_{x}(E_i)$ intersecting the noodle $N_k$ have the same sign of intersection, so $M(\\langle (F_i)x,N_{k}\\rangle)=\\LWcr(x)$. \n\\end{proof}\n\n\\begin{lemma}\\label{lemma:SNimplies_C}\nLet $x\\in B_n^{\\sf sn}$. Then $x$ has Property {\\rm{\\textbf{(C)}}}. 
\n\\end{lemma}\n\\begin{proof}\nThe proof is by induction on the number $r$ of non-$\\delta$ factors in the dual normal form of $x$. \nThe case $r=1$ is checked by direct calculation. Actually, in this case $(E_j)x$ has at most one maximally labeled arc for any $j$. \n\nSuppose $N_{\\sf d}(x)=\\delta^{p}d_1\\cdots d_r$ with $r>1$. Then $x'=\\delta^{p} d_1\\cdots d_{r-1}$ is also simply-nested and has Property \\textbf{(C)} by the induction hypothesis. Let us express\n$d_{r-1}$ and $d_{r}$ as products of disjoint polygons: $d_{r-1}=P_{1}\\cdots P_{b'}$ and $d_{r}=Q_{1}\\cdots Q_{b}$.\n\nFor $f=1,\\ldots,n-1$, suppose that $\\alpha=\\wideparen{(ij)}$ and $\\alpha'=\\wideparen{(i'j')}$ are two arcs in $\\mathcal{M}_{x}(E_f)$ that intersect a common polygon $Q \\in \\{Q_{1},\\ldots,Q_{b}\\}$. \nBy Theorem \\ref{theorem:finalfactor}, all of $i,i',j,j'$ are vertices of $Q$.\nFollowing the proof of Theorem \\ref{theorem:finalfactor} we can find arcs $\\beta=\\wideparen{(kl)},\\beta'=\\wideparen{(k'l')}$ in the diagram $D_{x'}$ with label $\\LWcr(x)-1$ ($\\beta,\\beta'\\in \\mathcal M_{x'}(E_f)$) and $a_{k,l}\\vdash a_{i,j}$ and $a_{k',l'}\\vdash a_{i',j'}$. Moreover we can choose $\\beta,\\beta'$ so that $\\alpha$ and $\\alpha'$ come from $\\beta$ and $\\beta'$ respectively under the action of $Q$ (see Figure \\ref{Fig:LW} (a)). \n\nBy the simply-nestedness assumption, $k,l,k',l'$ must be vertices of a common polygon $P \\in \\{P_{1},\\ldots,P_{b'}\\}$. This implies that both $\\beta$ and $\\beta'$ intersect the same polygon $P$, hence by Property \\textbf{(C)} for $x'$, the arc segments $\\beta$ and $\\beta'$ are parallel with the same orientation. 
Therefore the same property holds true for $\\alpha$ and $\\alpha'$, as we wanted to show.\n\\end{proof} \n \n\\begin{remark}\nWe observe that, although it is a stronger property, simply-nestedness is fairly easy to check, whereas checking Property {\\bf{(C)}} directly is often hard since it requires knowing both the dual normal form and the curve diagram of a braid.\n\\end{remark}\n\n\n\\begin{proof}[Proof of Theorem \\ref{theorem:main_dual}]\nLemmas \\ref{lemma:CondC} and \\ref{lemma:SNimplies_C} show part (i).\n\n\nWe explain how to compute the final factor $d_{r}$ of the dual normal form of $x$, which gives an algorithm to compute the whole dual normal form of $x$ from its Burau matrix. Let $N_{\\sf d}(x)=\\delta^{p}d_{1}\\cdots d_{r}$ and write $d_{r}$ as a product of disjoint polygons: $d_{r}=Q_{1}\\cdots Q_{b}$.\n\n\nOur strategy to determine $d_{r}$ is as follows.\nWe show how to find some $a_{i,j}$ satisfying $a_{i,j} \\preccurlyeq_{\\sf d} d_{r}$ from $\\rho_{n}(x)$. Since $d_{r}$ is written as a product of at most $(n-2)$ letters $a_{i,j}$, by iterating this procedure at most $(n-2)$ times, we eventually determine $d_{r}$.\n\nFor $i=1,\\ldots,n-1$, let $M^{c}_{i}(x)=\\max\\{M(\\rho_n(x)_{ji})\\: | \\: j=1,\\ldots,n-1\\}$, namely, the maximal degree of the variable $q$ in the $i$-th \\emph{column} of the Burau matrix of $x$ (do not confuse with $M_{i}(x)$ in Section \\ref{section:classical}, where we used the maximal degree in the $i$-th \\emph{row}). First we show that $M^{c}_{i}(x)$ gives candidates for $a_{i,j}$ satisfying $a_{i,j}\\preccurlyeq_{\\sf d} d_{r}$.\n\n\\begin{claim}\n\\label{claim:1p}\nWe have $$\\min\\{i\\in \\{1,\\ldots,n\\}\\: | \\: \\exists j,a_{i,j}\\preccurlyeq_{\\sf d}d_r\\}=\n\\min\\{i\\in \\{1,\\ldots,n-1\\}\\: | \\: M^{c}_i(x)=M(\\rho_n(x))\\}.$$ \n\\end{claim}\n\\begin{proof}\nLet $i_0=\\min\\{i\\in \\{1,\\ldots,n\\}\\: | \\: \\exists j,a_{i,j}\\preccurlyeq_{\\sf d}d_r\\}$.\nLet $k>i_0$ be such that $a_{i_0,k}\\preccurlyeq_{\\sf d} d_r$. 
\n\nFirst, we show that $M_{i_0}^{c}(x)=M(\\rho_n(x))$. \nSince $a_{i_0,k}\\preccurlyeq_{\\sf d} d_r$, by Theorem \\ref{theorem:finalfactor} there must exist some $p\\in\\{1,\\ldots,n\\}$, $p>i_0$, such that $D_x$ admits an arc $\\alpha=\\wideparen{(i_0p)}$ labeled $\\LWcr(x)=M(\\rho_n(x))$. Let also $Q\\in\\{Q_1,\\ldots,Q_b\\}$ be a polygon having vertices $i_0,p,k$ and let $j$ be such that $\\alpha\\in\\mathcal M_x(E_j)$.\nWe observe that $\\alpha$ intersects the noodle $N_{i_0}$. We will show that $M(\\rho_n(x)_{j,i_{0}})=M(\\rho_n(x))$.\nIndeed, let $\\alpha'\\in \\mathcal M_x(E_j)$ and suppose that $\\alpha'$ intersects $N_{i_0}$. By minimality of $i_0$, $\\alpha'$ must intersect the polygon $Q$, and by Property {\\bf{(C)}}, $\\alpha'$ is parallel to $\\alpha$ with the same orientation. Hence $\\alpha$ and $\\alpha'$ intersect $N_{i_0}$ with the same sign. \nTherefore $M(\\langle (F_j)x,N_{i_0}\\rangle)=M(\\rho_n(x))$ as we wanted to show. \n\nSecond, we show that for $i<i_0$ we have $M^{c}_{i}(x)<M(\\rho_n(x))$. By minimality of $i_0$ and Theorem \\ref{theorem:finalfactor}, every arc with maximal label $\\LWcr(x)$ is of the form $\\wideparen{(kp)}$ with $k,p>i$; since the punctures $k$ and $p$ lie on the same side of the noodle $N_{i}$, the contributions of degree $\\LWcr(x)$ to $\\langle (F_j)x,N_{i}\\rangle$ cancel for every $j$. This proves the claim.\n\\end{proof}\n\nThanks to Claim \\ref{claim:1p}, from the matrix $\\rho_{n}(x)$ we can find some $a_{i,j}$ satisfying $a_{i,j}\\preccurlyeq_{\\sf d}d_{r}$; as explained above, iterating this procedure at most $(n-2)$ times determines $d_{r}$, and hence the whole dual normal form of $x$.\n\\end{proof}\n\n\\begin{corollary}\nLet $x\\in B_n$ and let $N_{\\sf d}(x)=\\delta^{p}d_{1}\\cdots d_{r}$ be its dual normal form. If for some $r'\\leqslant r$\n\\begin{enumerate}\n\\item[(i)] the braid $\\delta^{p}d_{1}\\cdots d_{r'}$ is simply-nested, and\n\\item[(ii)] $r' > e(d_{r'+1}\\cdots d_{r})$,\n\\end{enumerate}\nthen $\\rho_n(x)\\neq 1$.\nMoreover the condition (ii) is always satisfied if $r' > \\frac{n-2}{n-1}r$.\n\\end{corollary}\n\\begin{proof}\nPut $E=e(d_{r'+1}\\cdots d_{r})$. Assume to the contrary that $\\rho_n(x)=1$.\nSince $e(d_{i})\\leqslant(n-2)$, we have \n$ 0 = e(x) \\leqslant (n-1)p + (n-2)r' + E $, so $-p \\leqslant \\frac{1}{n-1}((n-2)r'+ E)$.\nOn the other hand, by (i)\n\\[ 0=M(\\rho_n(x)) = M(\\rho_{n}(\\delta^{p}d_{1}\\cdots d_{r'}) \\rho_n(d_{r'+1} \\cdots d_{r})) \\geqslant M(\\rho_{n}(\\delta^{p}d_{1}\\cdots d_{r'}) )= p+r', \\]\nhence $r' \\leqslant -p$. Therefore $r' \\leqslant \\frac{1}{n-1}((n-2)r'+ E)$, which is equivalent to $r' \\leqslant E$. This contradicts (ii). The last assertion follows from the inequality $E \\leqslant (n-2)(r-r')$. \n\\end{proof}\n\n\n\nWe close this section by looking at some known examples of elements in the kernel of the Burau representations $\\rho_5$ and $\\rho_6$. 
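Before turning to the examples, note that the quantity $M(\rho_n(x))$, the maximal degree of $q$ occurring in the entries of the Burau matrix, is easy to compute mechanically. The following sketch is an illustration added here, not part of the original text: it uses SymPy, hard-codes the reduced Burau matrices for $n=3$, and composes words by plain left-to-right matrix multiplication, which may differ from the right-action convention used in the paper.

```python
import sympy as sp

q = sp.symbols('q')

# Reduced Burau matrices for B_3 in the basis of standard forks.
B1 = sp.Matrix([[-q, 0],
                [ 1, 1]])
B2 = sp.Matrix([[1,  q],
                [0, -q]])

def max_q_degree(mat):
    """M(.): maximal degree of q over the nonzero entries of a matrix
    with polynomial entries (only nonnegative powers of q occur here)."""
    return max(sp.degree(entry, q) for entry in mat if entry != 0)

# A single generator has maximal degree 1.
assert max_q_degree(B1) == 1

# The two sides of the braid relation agree, and have maximal degree 2.
assert (B1*B2*B1).expand() == (B2*B1*B2).expand()
assert max_q_degree(B1*B2*B1) == 2

print("M(rho_3(sigma_1)) =", max_q_degree(B1))
print("M(rho_3(sigma_1 sigma_2 sigma_1)) =", max_q_degree(B1*B2*B1))
```

The helper `max_q_degree` is a name introduced for this sketch; for words containing inverse generators one would need Laurent-degree handling, which is omitted here.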
\n\nConsider the braids $$x=[v_2^{-1}v_1\\sigma_3v_1^{-1}v_2,\\sigma_3] \\in B_6,$$ where $v_1=\\sigma_1\\sigma_2^{-1}\\sigma_5^{-1}\\sigma_4$ and \n$v_2=\\sigma_1^{-2}\\sigma_2\\sigma_5^{2}\\sigma_4^{-1}$ and \n$$y=[w_1^{-1}\\sigma_4w_1,w_2^{-1}\\sigma_4\\sigma_3\\sigma_2\\sigma_1\\sigma_1\\sigma_2\\sigma_3\\sigma_4w_2]\\in B_5,$$ where \n$w_1=\\sigma_3^{-1}\\sigma_2\\sigma_1^{2}\\sigma_2\\sigma_4^{3}\\sigma_3\\sigma_2$ and $w_2=\\sigma_4^{-1}\\sigma_3\\sigma_2\\sigma_1^{-2}\\sigma_2\\sigma_1^{2}\\sigma_2^{2}\\sigma_1\\sigma_4^{5}$. \n\nIt is known that $\\rho_5(y)=Id$ and $\\rho_6(x)=Id$. \nThe following are dual normal forms of a conjugate $x'$ and $y'$ of $x$ and $y$, respectively: \n\n$$N_{\\sf d}(x')=\\delta_6^{-6}(a_{1,6}a_{4,5})(a_{1,6}a_{2,5})(a_{1,6}a_{4,6}a_{2,3})(a_{1,5}a_{4,5}a_{2,3})(a_{3,6}a_{4,5})(a_{1,6}a_{2,5}a_{4,5})(a_{1,6}a_{3,5})(a_{1,6}a_{5,6}a_{2,4})$$\n$$(a_{1,3}a_{5,6})(a_{2,4}a_{5,6})(a_{1,3}a_{5,6}a_{4,5})(a_{2,6}a_{4,5})(a_{1,3}),$$\n\n\n$$N_{\\sf d}(y') =\\delta_5^{-23} (a_{2,5}a_{4,5})(a_{1,5}a_{3,5})(a_{1,4}a_{3,4})(a_{2,5})(a_{1,5}a_{2,3})(a_{1,5}a_{3,4})^{2}(a_{1,3})(a_{2,5}a_{3,4})$$\n$$(a_{1,4}a_{3,4})(a_{1,2}a_{1,4})(a_{1,2}a_{3,5})(a_{1,2}a_{1,5}a_{3,4})(a_{1,5})(a_{1,2})(a_{2,3})(a_{3,4})$$\n$$(a_{2,4})(a_{1,3}a_{4,5})(a_{1,2}a_{4,5})(a_{2,3}a_{4,5})(a_{1,3}a_{4,5})(a_{1,2}a_{3,5}a_{4,5})(a_{2,5}a_{3,5})$$\n$$(a_{1,3}a_{1,4})(a_{1,2}a_{1,4})(a_{1,2}a_{1,3}a_{4,5})(a_{1,2}a_{4,5})(a_{2,3}a_{4,5})^{2}(a_{2,5})(a_{1,4}a_{2,3})$$\n$$(a_{2,5}a_{3,5})(a_{1,5}a_{3,5})(a_{1,5}a_{2,4})(a_{1,5}a_{4,5}a_{2,3})(a_{1,5}a_{3,5}a_{2,3})^{4}(a_{2,4})(a_{1,3}a_{4,5})$$\n$$(a_{1,2}a_{4,5})(a_{2,3}a_{4,5})(a_{1,3}a_{4,5})(a_{1,2}a_{4,5}a_{3,4}).$$\n\n\nSee Figure \\ref{fig:NormalForm6} for pictorial (polygon) expression of $N_{\\sf d}(x')$. One notices that $N_{\\sf d}(x')$ contains many non-simply-nested pairs. 
Similarly, one observes that $N_{\\sf d}(y')$ also contains many non-simply-nested pairs.\nThese examples and our results on simply-nested braids suggest that the Burau matrix of a braid $x$ can be close to the identity matrix only when its dual normal form contains many non-simply-nested pairs.\n\n\\begin{figure}[htbp]\n \\begin{center}\n\\includegraphics*[scale=1.4]{NormalForm.eps}\n\\caption{A pictorial way to represent the dual normal form of the braid $x'\\in B_6$ (the puncture surrounded by a circle is the first); the symbol $\\ast$ represents non-simply-nested pairs.}\n\\label{fig:NormalForm6}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}