\section{Introduction and outline}

The time-dependent thermal radiative transport equations couple a transport equation with an internal matter density evolution equation.
Simulating these dynamics is extremely time consuming and we are interested in stochastic (Monte Carlo) approaches, more precisely in situations involving a large range of opacities~; in such cases the particles used in the Monte Carlo simulation undergo a wide range of behaviors~: on the one hand, long-time rectilinear propagation interrupted by rare scattering events, and on the other hand, high-intensity scattering with negligible overall displacement; all intermediate regimes are also present.
The two extreme regimes can either be simulated directly or with good-quality approximations, and the corresponding works are documented in the literature (see section~\ref{sec:motivation}).
But treating all regimes simultaneously has remained a challenge, and our contribution introduces a unified method to tackle this situation. To this end we exploit a hidden smoothness in these models, situated at the level of the statistics of the escape laws of a particle from a given domain.

The outline of the paper is the following~: we present in section~\ref{sec:motivation} some motivating comments and the state of the art; we then describe in section~\ref{sec:method} our method, based on an offline-online approach that exploits the quantiles of the escape laws from a domain.
The assumptions of the method are checked numerically in section~\ref{sec:test_exit_time} and the method is then tested on a benchmark, with good results, in section~\ref{sec:transfert_rad}.
Concluding remarks are presented in section~\ref{sec:conclusion}.

\section{Motivation and state of the art}
\label{sec:motivation}

We briefly present the principles of the Monte Carlo method used to solve the transport equation~\eqref{eq:transport} and the problem of the diffusion limit in a general setting. We then present the state of the art of the methods that treat this high-collision regime.
Consider the integro-differential transport equation of the form~:
\begin{equation}
	\frac{1}{c} \partial_t u(t,x,\omega) + \omega \cdot \nabla u(t,x,\omega) + (\sigma_a(t,x) + \sigma_s(t,x)) u(t,x,\omega) = \sigma_s(t,x) \moyAng{u}(t,x),
	\label{eq:transport}
\end{equation}
with time $t \in \mathbb{R}^+$, position $x\in \mathcal{D} \subset \mathbb{R}^d$ ($d\ge 1$ is the dimension), $\omega \in \mathcal{S}^{d}$ (the unit sphere in $\mathbb{R}^d$) the angle of propagation and $\moyAng{u}(t,x) = \frac{\int_{\mathcal{S}^d} u(t,x,\omega') d \omega'}{\int_{\mathcal{S}^d} 1 \cdot d \omega'}$
the angular average of $u$ on $\mathcal{S}^d$.

The absorption opacity $\sigma_a$ and the scattering opacity $\sigma_s$ are (known) functions depending on the spatial discretization. To solve this equation we focus on the approaches described in~\cite{sentis98}, which interpret~\eqref{eq:transport} as a time-evolving probability density and simulate the underlying stochastic process.

When $\sigma_s(t,x) \to \infty$ we are in the ``diffusion limit'' and the cost of the Monte Carlo method is prohibitive~\cite{densmore2005discrete}~: each particle undergoes a high number of collisions, with the mean time between two collisions being $O\left(\frac{1}{\sigma_s}\right)$.
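To make this cost concrete, here is a minimal illustrative sketch (our own code, not part of any cited method; all names are ours) counting collisions along a path when the free flights are exponential with mean $1/\sigma_s$; the count, and hence the tracking cost, grows linearly with $\sigma_s$:

```python
import random

def count_collisions(sigma_s, path_length, seed=0):
    """Count scattering events along a path of given total length when
    successive free-flight lengths are exponential with mean 1/sigma_s."""
    rng = random.Random(seed)
    travelled, n_coll = 0.0, 0
    while True:
        travelled += rng.expovariate(sigma_s)  # free flight ~ Exp, mean 1/sigma_s
        if travelled >= path_length:
            return n_coll
        n_coll += 1

# The mean number of collisions scales like sigma_s * path_length,
# which is what makes naive particle tracking prohibitive for large sigma_s.
for sigma_s in (1.0, 10.0, 100.0):
    mean = sum(count_collisions(sigma_s, 10.0, seed=s) for s in range(200)) / 200
    print(f"sigma_s={sigma_s}: about {mean:.0f} collisions per path")
```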
But the asymptotic analysis~\cite{larsen1980diffusion} shows that \eqref{eq:transport} converges towards the diffusion limit equation~:
\begin{equation}
	\frac{1}{c} \partial_t \moyAng{u}(t,x) = \nabla \cdot \left[ \frac{1}{3 \sigma_s} \nabla \moyAng{u}(t,x) \right].
	\label{eq:diffusion} 
\end{equation}

Equation~\eqref{eq:transport} appears in particular when solving radiative transfer equations, where an isotropic scattering term is necessarily added by the \textit{Implicit Monte Carlo} linearization method \cite{FLECK1971313} in order to artificially represent the phenomena of absorption and re-emission.
Another approach that avoids artificial scattering is proposed in \cite{brooks1989symbolic}, but the problem remains unchanged when important physical scattering terms are present. In the context of radiative transfer, several methods have been proposed exploiting the limit regime~\eqref{eq:diffusion}.

The \textit{Random Walk} (RW) methods \cite{fleck1984random, giorla1987random} exploit the fact that the trajectories of the particles are close to those of a Brownian motion: in an optically thick medium they replace (a part of) the trajectory by a single diffusion step in the largest sphere contained in the mesh.
The \textit{Random Walk} methods have the advantage of being active everywhere on the domain, and are easily applied to $3$-dimensional problems as well as multi-group problems.
Their use in a production context remains limited by their strong dependence on the mesh size (the smaller the mesh size, the smaller the sphere where the method can be applied) and by the loss of precision introduced by the use of the diffusion limit for transient regimes.

Initially called \textit{Implicit Monte Carlo Diffusion}, the \textit{Discrete Diffusion Monte Carlo} (DDMC) method~\cite{gentile2001implicit,densmore2005discrete,densmore2006hybrid} splits the domain into two regions: an optically thick region solved by a Monte Carlo method using a diffusion equation, and the remaining part treated by the \textit{IMC} method. The numerical simulation in the optically thick region uses a linearization similar to the \textit{IMC} method. A new type of particle is then introduced to solve the diffusion equation. The advantage of this method is that there is no net flux to consider between the diffusion and transport regions (the flux is carried by the particles); the particles can go from one region to another (by a conversion) and, more importantly, can change cell (with different $\sigma_a$ and $\sigma_s$ values) with no particular treatment. The introduction of a new type of particle to treat the diffusion region allows easy treatment of the interface between the transport and diffusion regions. Contrary to the \textit{RW} methods, the efficiency of these methods does not depend on the mesh; however, their use is still restricted by the loss in precision introduced by the diffusion approximation when particles change region.

Hybrid approaches \cite{pomraning1979, clouet2003hybrid} solve the diffusion equation analytically in some spatial areas and use the \textit{IMC} approach in others. Both methods are coupled by boundary conditions. The hybrid methods use an analytical resolution of the scattering equation when certain criteria are met (delimited areas or according to the frequency group).
The use of these methods remains limited by the delicate coupling between the analytical resolution of the diffusion equation and the Monte Carlo method solving the transport equation, as well as by the choice of criteria (e.g., the definition of areas where the diffusion approximation can be used).

When the coefficient $\sigma_s(t,x)$ in \eqref{eq:transport} is large, the classical Monte Carlo method uses Markov particles that undergo a large number of scattering events. The randomness of the scattering part dominates and after a certain time the state of the particle follows a known probability law; in this case the \textit{RW} approximation is justified. However, there are always intermediate regimes where the number of collisions is large enough to slow down the computation but not high enough to justify the use of the diffusion approximation.

A new Monte Carlo method that is efficient regardless of the value of $\sigma_s$ and that does not reduce the accuracy of the solution is still a challenge. Ideally the method should not be sensitive to the mesh used (i.e., robust to changes in the value of $\sigma_s$ and not limited to simple spatial domains, e.g., a sphere); and it needs to be valid regardless of the value of $\sigma_s$ (or to activate according to criteria independent of a choice of spatial areas, unlike methods of \textit{RW} type).

Our approach, called the \methodname{} method, is not to use the diffusion limit approximation but to work with an approximation of the probability law of the \textbf{exact solution} of the escape time, position and direction from the spatial cell.

\section{The \methodname{} method} \label{sec:method}
We will consider $d=1$ in all this section and work on a segment (possibly divided into several sub-intervals).
To ease notation we will also write $\sigma$ instead of the scattering opacity $\sigma_s$.

\subsection{Toy model illustration}

We present here a simple example, used later in the numerical tests of section~\ref{sec:transfert_rad}, that will be useful to describe the \methodname{} method below.

Consider a $1D$ particle in the segment $[x_{min},x_{max}]$ situated at the initial time $t=0$ at position $x=x_{init}$ with angle $a\in \{-1,1\}$. The total remaining simulation time is $t_{max}$; in the general simulation $t_{max}$ equals the overall time step $\Delta t$ decremented by any previous time increments for this particle (for instance when the particle traverses several cells during the same $\Delta t$).

The exact evolution of the particle is the following: rectilinear movement in direction $a$ for a time $\tau$ (an exponential random variable of mean $1/\sigma$), then a collision takes place. This collision changes the angle uniformly at random to a new value $a'\in\{-1,1\}$. The process then repeats until a boundary is reached~: $x=x_{min}$, $x=x_{max}$ or $t=t_{max}$.

We are interested precisely in this escape place (one of the extremities of the segment or of the time domain) and the escape angle.
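This toy evolution can be sampled directly; the following minimal sketch (function and variable names are ours, not taken from the paper's code) returns one realization of the escape place, time and angle:

```python
import random

def sample_escape(sigma, x_init, a, t_max, x_min=0.0, x_max=1.0, speed=1.0, seed=None):
    """One draw of the escape (place, time, angle) for the 1D toy model:
    straight flight at `speed` in direction a in {-1, +1}; times between
    collisions are exponential with mean 1/sigma; each collision redraws
    the angle uniformly in {-1, +1}."""
    rng = random.Random(seed)
    x, t = x_init, 0.0
    while True:
        tau = rng.expovariate(sigma)                 # time to next collision
        t_wall = (x_max - x) / speed if a > 0 else (x - x_min) / speed
        dt = min(tau, t_wall, t_max - t)             # first event wins
        x, t = x + a * speed * dt, t + dt
        if t >= t_max or x <= x_min or x >= x_max:
            return x, t, a                           # escape place, time, angle
        a = rng.choice((-1, 1))                      # collision: resample angle
```

Each call returns one realization of the escape place, time and angle; histograms of many such draws approximate the escape law.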
This is a random variable whose distribution will be denoted 
$\mathcal{E}(\sigma,\ell,t_{max})$,
where $\ell=(x_{init}-x_{min})/ (x_{max}-x_{min})$ is the relative initial position of the particle.

The probability law $\mathcal{E}(\sigma,\ell,t_{max})$ has its support in 
\begin{equation}
\left[
\Big\{(x_{min},t), t\in[0,t_{max}]\Big\} \cup 
\Big\{(x,t_{max}), x\in[x_{min},x_{max}]\Big\}
\cup \Big\{(x_{max},t), t\in[0,t_{max}]\Big\}\right] \times \{-1,1\}.
\end{equation}
Note that, although the distribution seems to be $3$-dimensional, conditional on knowing the escape side only one dimension is essential; for instance, escaping through 
$x_{min}$ / $x_{max}$ implies that the angle is $-1$ / $1$ respectively, otherwise (escape at $t=t_{max}$) the angle is a random element of $\{-1,1\}$.
An illustration is given in figure~\ref{fig:xminxmaxtmax}.
\begin{figure}[htb!]
	\begin{center}
		\includegraphics[width=.49\textwidth]{exit_time_plot_v1.pdf}	
		\caption{An illustration of the escape dynamics of a particle starting at $x_{init}$ and undergoing collisions after $Exp(\sigma)$ times (exponential random variables of mean $1/\sigma$). The particle can escape through any of the domain's frontiers: either because it escapes the spatial domain (dotted trajectory) or because the time is up (dashed trajectory). The random events accumulate into a probability law 
	denoted	$\mathcal{E}(\sigma,\ell,t_{max})$
			with support on the boundaries of the time-space domain (together with an escape angle attribute).
		} \label{fig:xminxmaxtmax}
	\end{center}
\end{figure} 

\subsection{The method}

Section~\ref{sec:motivation} highlighted the difficulty of dealing with the diffusion limit of equation~\eqref{eq:transport} and the limitations of existing Monte Carlo methods.
We propose a new Monte Carlo method, inspired by algorithms such as \textit{Random Walk}, that 
works with the probability laws $\mathcal{E}(\sigma,\ell,t_{max})$ of escape from a cell and is based on vector quantization techniques~\cite{book_quantization_measures}.


\begin{enumerate}
\item We define grids of representative values of the main parameters involved~; for instance in 1D, we employ a grid $G_{sc}$ for $\sigma$ values (in practice a log-uniform grid from $7.5\times 10^{-3} cm^{-1}$ to $9.0\times 10^{6} cm^{-1}$), a grid $G_{time}$ for simulation time values $t_{max}$ (a uniform grid from $400fs$ to $40000fs$) and a grid $G_{ini}$ for the relative initial position in the cell (from $0\%$ to $100\%$, measured from the left segment end). Each grid $G_{sc}$, $G_{time}$, $G_{ini}$ has $100$ points. We denote by $|G|$ the size of a grid $G$.

\item An \textbf{offline} computation is done once and for all (independently of the final simulation) in order to obtain an approximation of the joint distribution (escape time, escape point, escape direction) 
 $\mathcal{E}(\sigma,\ell,t_{max})$. For each point in 
$G_{sc} \times G_{time} \times G_{ini}$
 we compute and store the quantiles of the law. This approximation is valid beyond the framework of the diffusion limit; in particular it does not use any analytical form. In practice we perform $1500$ simulations for each point in $G_{sc} \times G_{time} \times G_{ini}$ but extract only $J$ quantiles from the whole distribution (cf. the previous remarks on the fact that the distribution is essentially one-dimensional)\footnote{When $\sigma$ is large enough to ensure that the diffusion approximation is valid, one can sample this law using this diffusion approximation. 
In practice we use a very conservative approach by replacing, for $\sigma$ large, several collisions with one collision provided that the diffusion approximation ensures that the probability to escape is less than $10^{-6}$. Note that this is only a way to compute the exact law faster; the \methodname{} does not depend on this choice, any sampler of the exact escape law will do.}. 
The quantiles are minimizers of the Huber-energy distance to the target and correspond to the optimal quantization of the measure, cf. \cite[prop. 21]{measure_quantization_turinici22} and \cite[prop. 3 and 4]{turinici_deep_2023}; for $J$ points the optimal quantile levels are $\frac{j+1/2}{J}$, $j=0,...,J-1$. 
This part of the simulation is highly parallelizable. The results are stored as a 
$|G_{sc}| \times |G_{time}| \times |G_{ini}| \times J$ array of escape points $x$ or $t$, 
together with the $3$ positive numbers (summing up to $1$) indicating the probability of escape through each side; for us $J=100$, so the number of stored values is $100^3\times 103$, requiring $\sim 800\,$MB of storage.
\item During the {\bf online} simulation, 
each time a particle 
of parameters $(\sigma,\ell,t_{max})$ 
needs to be advanced to its next escape point, 
a set of parameter values 
$\sigma^g,\ell^g,t_{max}^g$ from the 3D grid 
$G_{sc} \times G_{time} \times G_{ini}$ is chosen (see below for details) and 
 a random quantile from the stored distribution 
 $\mathcal{E}(\sigma^g,\ell^g,t_{max}^g)$ is selected 
 and returned to the user. The particle is advanced with the corresponding space/time increments prescribed by the escape quantile returned.
The grid point 
$\sigma^{g},\ell^g,t_{max}^g$ is chosen by identifying, for each of the parameters 
$\sigma,\ell,t_{max}$, 
the $2$ closest values of the grid~: 
$\sigma \in [\sigma^{k_1},\sigma^{k_1+1}]$,
$t_{max} \in [t_{max}^{k_2},t_{max}^{k_2+1}]$,
$\ell \in [\ell^{k_3},\ell^{k_3+1}]$~; 
then we select one of them at random with probabilities depending on the relative distance between the actual parameter and the grid points, for instance $\sigma^g=\sigma^{k_1}$ with probability 
 $(\sigma^{k_1+1}-\sigma)/ (\sigma^{k_1+1}-\sigma^{k_1})$.
 
\end{enumerate}
Such an approach does not raise questions of validity of the diffusion limit or of the calculation of the escape time from spheres (which resorts to partial differential equations with assumptions and boundary conditions sometimes difficult to tackle, cf.~\cite{https://doi.org/10.1002/eng2.12109, giorlaCEAn84}).

The method is called ``quantized'' because we always sample from a pre-defined list of quantiles. In practice this quantization dimension is no more surprising than, e.g., the spatial discretization of the mesh, and if enough quantiles are considered its contribution to the overall error is negligible. The foundations of the method are well established 
(see~\cite{book_quantization_measures} for general information on the mathematical objects and \cite{measure_quantization_turinici22} more specifically tailored to our applications).


\section{Numerical tests}

\subsection{Toy model tests: escape time and position} \label{sec:test_exit_time}

In order for the \methodname{} method to work conveniently, one needs to ensure that 
the distribution $\mathcal{E}(\sigma,\ell,t_{max})$ is close to the mixture of the closest grid distributions $\mathcal{E}(\sigma^g,\ell^g,t_{max}^g)$.
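The per-parameter stochastic rounding used in the online step (one grid neighbour per parameter, picked with probability given by the relative distance) realizes precisely such a mixture; a minimal sketch (the helper name and grids are ours):

```python
import bisect
import random

def stochastic_grid_point(value, grid, rng):
    """Round `value` to one of its two neighbours in the sorted `grid`,
    chosen at random with probabilities given by the relative distances
    (the choice is unbiased: the expected pick equals `value`)."""
    if value <= grid[0]:
        return grid[0]
    if value >= grid[-1]:
        return grid[-1]
    i = bisect.bisect_right(grid, value) - 1     # grid[i] <= value < grid[i+1]
    p_low = (grid[i + 1] - value) / (grid[i + 1] - grid[i])
    return grid[i] if rng.random() < p_low else grid[i + 1]

# Online step: round each parameter (sigma, ell, t_max) independently, then
# draw one of the J stored quantiles of the law at the resulting grid node.
rng = random.Random(0)
picks = [stochastic_grid_point(0.25, [0.0, 1.0, 2.0], rng) for _ in range(10000)]
print(picks.count(0.0) / len(picks))   # close to 0.75
```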
This, in turn, depends on the 
smoothness of the mapping $(\sigma,\ell,t_{max}) \mapsto \mathcal{E}(\sigma,\ell,t_{max})$, which we investigate in the following. More precisely, we plot in figure~\ref{fig:toy_example} several histograms corresponding to different typical parameter values encountered in the numerical tests of section~\ref{sec:transfert_rad}. As expected, the laws vary slowly with the parameters.
For instance, in practice we noted that a grid of values for $\sigma$ spaced log-uniformly by about a $25\%$ increase from one point to the next gives very satisfactory results.

\begin{figure*}[htb!]
\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma0.75_xinit0.005.pdf}
\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma7.5_xinit0.005.pdf}

\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma1.0_xinit0.005.pdf}
\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma10_xinit0.005.pdf}

\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma1.25_xinit0.005.pdf}
\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma12.5_xinit0.005.pdf}
\caption{We plot the escape times for a particle from a time-space domain 
	$[x_{min},x_{max}]\times [0, t_{max}]$. The probabilities of escape are given in the title of each plot. 
We take	$x_{min}=0.0$, 
	$x_{max}=0.01$, initial direction $+1$, 
	$t_{max}=4000 fs$, speed $3.0\times 10^{-5}$ (speed of light in cm/fs), $x_{init}=0.005$, and vary the $\sigma$ parameter (in $cm^{-1}$).
The plots in the left column correspond to the escape distributions for $\sigma=0.75$ (first row), 
 $\sigma=1$ (second row) and $\sigma=1.25$ (third row); the plots in the right column correspond to 
 $\sigma=7.5$ (first row), 
 $\sigma=10$ (second row) and $\sigma=12.5$ (third row).
In the latter case the collisions are too numerous and the particle does not move significantly, i.e., it escapes only because the time is exhausted.}
\label{fig:toy_example}
\end{figure*}

\subsection{Propagation of a Marshak-type wave with multi-regime physics and temperature dependent opacity}\label{sec:transfert_rad}

We test the method on the propagation of a Marshak-type wave in an opaque medium (see \cite{marshak1958,MCCLARREN20089711} for details), which is considered a good benchmark for difficult multi-regime computations.
We assume an ideal gas equation of state under the gray approximation.
The Monte Carlo method used here is based on the Fleck and Cummings \cite{FLECK1971313} linearization.
We use a model with two temperatures (radiative and matter)~: unless mentioned otherwise, the term \textit{temperature} (denoted $ T_{matter} $) will indicate the 
matter temperature.
This is a 1D benchmark in rod geometry (like the $S_N$ method~\cite{carlson_solution_1958} with $N=2$) with symmetry conditions on the top and bottom edges of the mesh.
The values and units used are specified in table~\ref{tab:valeursMarshak}.
We then solve the system of equations \eqref{eq:IMC_rod} for $t \in [t^n, t^{n+1}[$, where $I^+(t, x) = I(t, x, \mu=1)$ and $I^-(t, x) = I(t, x, \mu=-1)$, and $f^n$ is the Fleck factor~:
\begin{equation}
\begin{aligned}
\frac{1}{c} \partial_t I^+ + \partial_x I^+ + \sigma^n f^n I^+ &= \sigma^n f^n \frac{a c T_{matter}^4(t^n)}{2} + \sigma^n (1-f^n) \frac{1}{2} (I^+ + I^-) \\
\frac{1}{c} \partial_t I^- - \partial_x I^- + \sigma^n f^n I^- &= \sigma^n f^n \frac{a c T_{matter}^4(t^n)}{2} + \sigma^n (1-f^n) \frac{1}{2} (I^+ + I^-) \\
C_V \partial_t T_{matter} &= \sigma^n f^n ( 2 \pi (I^+ + I^-) - a c T_{matter}^4(t^n))
\end{aligned}
\label{eq:IMC_rod}
\end{equation}

\begin{table*}[!htb]
	\centering
	\begin{tabular}{|c|c|c|c|}
		\hline
		$I$ & $erg . cm^{-2} . 
s^{-1}$ & $a$ & $ 7.56 \times 10^{-15} \ erg . cm^{-3} . K^{-4} $ \\ \hline
		$\Delta t$ & $4 \times 10^{-11} \ s$ & $d$ & $1.56 \times 10^{23} \ K^3 . g^{-1} . cm^2$ \\ \hline
		$\rho$ & $3 \ g . cm^{-3}$ & $c$ & $3 \times 10^{10} \ cm . s^{-1}$ \\ \hline
		$T_{matter}$ & $K$ & $T_{matter}(0,\cdot)$ & $11604 \ K$ \\ \hline
		$C_V$ & $8.6177 \times 10^{7} \ erg.g^{-1}.K^{-1}$ & $T_{matter}(\cdot, \text{left border})$ & $11604000 \ K$	 \\ \hline	 
	\end{tabular}
	\caption{Values and units used in the numerical simulation of the propagation of a Marshak-type wave in an opaque medium. \label{tab:valeursMarshak}}
\end{table*}

We analyze the wave profile at $1ns$, $5ns$ and $10ns$ using a time step of $\Delta t = 4 \times 10^{-11} s$.
To do this, we perform one run with the classical Monte Carlo method 
and one run with our method, with
$50$ cells and $N_{obj} = 200$ (target number of particles per cell); we employ the local regularization method of~\cite{laguzet2020, LAGUZET2022111373} 
and compare the wave intensities to check for physical consistency.

The \methodname{} is tested 
with a temperature dependent opacity given by the formula~: $\sigma = \rho \times d \times T_{matter}^{-3} \ cm^{-1}$ \cite{LAGUZET2022111373}.
The value used is computed at each iteration by the Fleck linearization method.
Note that the Fleck factor induces a scattering term that also depends on the matter temperature. This case illustrates the behavior of the method in a circumstance where the scattering values belong to different regimes. The results are presented in figure~\ref{fig:second_result_M}.
The comparison with reference results shows good physical agreement, independent of the collision regime~: moreover the number of events per particle is substantially reduced (by a factor of $1000$, cf. 
right axis in the right plot of figure~\ref{fig:second_result_M}), together with the computation time.
Moreover, we notice that the computation time is no longer strictly proportional to the number of events, as it is for the classic IMC method; this indicates that with the new method the particle tracking is no longer the 
limiting phase of the computation, but the treatment carried out between tracking phases (for example, emission and regulation of the particles) becomes important (the time increases with the number of particles remaining at the end of the iteration).
 
\begin{figure*}[htb!]
	\includegraphics[width=0.9\textwidth]{comparaison_M2A_mcc_qmc.pdf}
	\caption{Results of the simulation described in section~\ref{sec:transfert_rad} (multi-regime physics, temperature dependent opacity). The number of events per particle for the \methodname{} is reduced with respect to the reference while keeping the physical properties of the solution. 
 {\bf Left image~:} temperature profile at times $1ns$, $5ns$ and $10ns$.
{\bf Right image, dashed lines, right axis~:} mean number of particle events per iteration for the classical Monte Carlo trajectory compared with the quantized simulation; {\bf Right image, solid lines, left axis~:} execution time per iteration for the two procedures. All plots refer to the same simulations.}
	\label{fig:second_result_M}
\end{figure*}

\section{Conclusion} \label{sec:conclusion}

We introduce the \methodname{} method to solve a computationally intensive multi-regime thermal radiative transport equation within a unifying framework. The method does not depend on any random walk assumptions to treat the high collision regime and 
relies on an offline computation followed by online sampling from a database.
We check empirically that the smoothness assumptions underlying the method hold, for the applications considered, to a satisfactory degree; we then test the approach on a 1D benchmark and obtain physically coherent results while improving the computational time. This opens the perspective of future work on more complicated geometries and higher-dimensional settings.
\begin{acknowledgments}
L.L. and G.T. acknowledge the support from their institutions.
\end{acknowledgments}
\bibliographystyle{unsrt}

\section{Introduction}
Superconducting detectors have been recognized as one of the most successful superconducting applications. 
Generally, they have the advantages of high sensitivity, fast response, and high energy resolution, and have successfully been applied to detect cosmic rays \cite{TES, MKID_1, MKID_2} and single photons \cite{SNSPD}. 
The three most successful examples include a transition edge sensor (TES) \cite{TES}, a superconducting nanowire single photon detector (SNSPD) \cite{SNSPD}, and a microwave kinetic inductance detector (MKID) \cite{MKID_1,MKID_2}. 
The TES is a kind of bolometer, in which a sharp superconducting transition edge is utilized as an ultra-sensitive thermometer \cite{TES}.
Therefore, the sensor temperature should be kept just at the midpoint of the superconducting transition at temperature $T_{\rm c}$.
The main part of the SNSPD is a meander line of superconducting nanowire, in which a bias current is fed just below the critical current.
A tiny fraction of the nanowire becomes normal when a Cooper pair is broken by an incident photon; thus, the photon incidence event is detected by probing the superconducting-to-normal transition.
In the case of the MKID, the kinetic inductance change induced by Cooper pair breaking in a superconducting resonator is detected as a change of the resonance frequency.


Some efforts were recently made to detect a neutron beam using a superconducting detector.
A TES with a B neutron absorption layer was proposed to be used as a neutron detector \cite{TES_B}.
Two superconducting tunnel junctions (STJs) on a single crystal of Li$_2$B$_4$O$_7$ with a neutron-converter layer of $^6$Li or $^{10}$B were also applied to detect neutrons \cite{STJ_1,STJ_2}.
We have been developing a unique superconducting neutron detector, called the current-biased kinetic inductance detector (CBKID), aiming at a neutron imager with higher spatial and energy resolutions via the time-of-flight (TOF) method (see Section~3.2 for details) using pulsed neutron sources \cite{MgB2_1,MgB2_2,CB-KID_1,CB-KID_2,CB-KID_3,CB-KID_4, Iizawa}.
The CBKID has superconducting meander microstrip lines, to which a finite DC bias current is fed, and a $^{10}$B neutron conversion layer.
The nuclear reaction between $^{10}$B and a neutron locally breaks a finite number of Cooper pairs in the superconducting microstrip lines (see Section~3.1 for details).
A transient change of the Cooper pair density under the DC bias current generates voltage pulses proportional to the time derivative of the local kinetic inductance.
Therefore, the CBKID can operate in a wide region of the superconducting phase.
High-resolution 
imaging in two dimensions with four signal readout lines was recently achieved using a combination of the CBKID and the delay-line method (delay-line CBKID).
Successful neutron imaging with a spatial resolution of 22\,$\mu$m \cite{CB-KID_4} was demonstrated.
Various approaches other than superconducting detectors have been used to develop neutron imaging systems with high spatial and temporal resolutions.
A gadolinium-gallium-garnet (GGG) scintillator-based neutron detector with a cooled charge-coupled device (CCD) camera has reached a spatial resolution of 11\,$\mu$m with a field of view of 6$\times$6\,mm$^2$ by optical magnification using cold neutrons at a reactor source \cite{GGG_scintillator}. 
However, the method using the CCD is not suitable for highly energy-resolved imaging at pulsed neutron sources.
The highest spatial resolution of 2\,$\mu$m was recently achieved using a gadolinium-oxysulfide scintillator and a cooled complementary metal-oxide-semiconductor (CMOS) camera from Andor Technology with a reactor neutron source \cite{2um}.
The detector magnifies the scintillation light from the scintillator.
One must calculate the center of mass of the scintillation events to obtain the high spatial resolution of 2\,$\mu$m.
Additionally, a neutron imager with high spatial and energy resolutions can be realized using a microchannel plate (MCP) \cite{MCPs}.
A $^{10}$B-doped MCP combined with a Timepix readout \cite{Timepix} was developed as a neutron imager \cite{A.S.Tremsin_2011, A.S.Tremsin_2015}.
The products of the nuclear reaction $^{10}$B(n, $^4$He)$^7$Li, which mainly emits a 0.84\,MeV $^7$Li ion and a 1.47\,MeV $^4$He ion, are converted into pulsed electrons, which are amplified in the $^{10}$B-doped MCP.
The signal is further electronically amplified, read out by the Timepix sensor array, and processed in an FPGA circuit, thereby achieving a high spatial resolution of 55\,$\mu$m with a high time resolution of 10\,ns.
A 
new chip, called Timepix3, is expected to overcome the previous count-rate limitation in modes with a high temporal resolution \cite{Timepix3}.\par


In the wavelength dependence of the neutron transmission, characteristic sawtooth structures, called Bragg edges, appear at wavelengths where the Bragg conditions are satisfied, because of a sudden change of the coherent scattering.
Therefore, one can distinguish the crystal structure and crystalline quality of samples from the Bragg edges.
The analysis of the Bragg edges gives unique information about the samples and is an important technique in materials science \cite{Bragg, nuetron_diff_2018}.
A pulsed neutron source, which generates neutrons over a wide energy range, is suitable for observing the Bragg edges.
Thus, the combination of a high temporal resolution of the neutron detector and the TOF technique at a pulsed neutron source is promising for the Bragg edge analysis of the neutron transmission.
The delay-line CBKID has potential for application in the Bragg edge method with a high spatial resolution.


The present work expands the active area of CBKIDs from 10\,$\times$\,10\,mm$^2$ in our preceding work \cite{CB-KID_2} to 15\,$\times$\,15\,mm$^2$; consequently, the spatial resolution is improved, as discussed in Section \ref{Sec.imaging}.
We demonstrate herein clear imaging of various test specimens, including biological and metal samples, over the whole sensor active area.
A successful observation of a stainless-steel Bragg edge in the neutron transmission is also discussed.

\section{Current-biased kinetic inductance detector}\label{Sec.imaging}

Detailed principles of the delay-line CBKID were described in Ref.~\cite{CB-KID_4}.
Here we briefly discuss the principles of signal generation and propagation, and of imaging using the delay-line method.

The kinetic inductance in the superconductor is inversely proportional to the Cooper pair density $n_{\rm 
s}$. \nThe transient change of $n_{\\rm s}$ on the superconducting wire locally occurs at a hot spot induced by a collision of the charged particle created via the nuclear reaction between $^{10}$B and a neutron.\nWhen a DC bias current $I_{\\rm b}$ is fed into the superconducting wire, a pair of voltage pulses is generated at a hot spot within a tiny segment of the superconducting wire over the length $\\Delta l$, and each pulse propagates toward both ends of the wire with opposite polarities as electromagnetic waves.\nA voltage $V$ across the hot spot is expressed as follows:\n\\begin{equation}\n\\label{eq:V}\nV=I_{\\rm b}\\frac{{\\rm d}L}{{\\rm d}t}\\simeq I_{\\rm b}\\frac{{\\rm d}L_k}{{\\rm d}t}=I_{\\rm b}\\frac{\\rm d}{{\\rm d}t} \\left(\\frac{m_{\\rm s}\\Delta l}{n_{\\rm s}q^2_{\\rm s}S}\\right)=-\\frac{m_{\\rm s}\\Delta l I_{\\rm b}}{n_{\\rm s}^2 q_{\\rm s}^2 S}\\frac{{\\rm d}n_{\\rm s}}{{\\rm d}t}, \n\\end{equation}\nwhere $S$ is the cross-sectional area of the superconducting wire; $m_{\\rm s}$ and $q_{\\rm s}$ are the effective mass and electric charge of the Cooper pair, respectively.\nWe stress that $V$ is not only the function of $n_{\\rm s}$ but also that of ${\\rm d}n_{\\rm s}\/{\\rm d}t$.\nThis is a crucial difference with the MKID.\nBecause of ultra fast quasi-particle excitation, ${\\rm d}n_{\\rm s}\/{\\rm d}t$ can be sufficiently large, and thus $V$ becomes finite even if the superconducting wire remains in the superconducting zero-resistance state at a hot spot. 
\nThis is in sharp contrast with the TES and the SNSPD.\n\nThe delay-line CBKID can image the hot-spot distribution on the detector.\nFigure~\\ref{CBKID}~(a) shows a schematic of the CBKID system.\nThe CBKID has two mutually orthogonal meander lines of superconducting Nb nanowires on a superconducting Nb ground plane.\nThe meander lines together with the ground plane can be regarded as superconductor-insulator-superconductor (S-I-S) coplanar waveguides, which exhibit low attenuation of high-frequency traveling waves \\cite{Swirhart}.\nConsequently, one can observe signals that travel through a 151\\,m-long superconducting waveguide.\nThe signal propagation velocity is suppressed by placing the superconducting meander line closer to the ground plane \\cite{Koyama}; because the two orthogonal meander lines lie at different distances from the ground plane, their propagation velocities differ from each other.\nA more detailed discussion of the signal generation and transmission of the CBKID based on superconducting electromagnetism has been reported in Ref.~\\cite{Koyama}.\nAs mentioned above, a pair of voltage pulses with opposite polarities appears at a hot spot on the meander line and propagates as electromagnetic waves toward both ends of the Nb meander line. 
\n\n\nWe can identify the signal quartet originating from a single event, even though several signals are simultaneously present on the meander lines, as discussed elsewhere~\\cite{CB-KID_4}.\nTherefore, the CBKID has a high multi-hit tolerance up to the temporal-resolution limit at which the signals can still be discriminated.\nThe neutron incident positions $X$ and $Y$ are determined as follows:\n\n\\begin{eqnarray}\n\\label{eq:X}\nX={\\rm ceil}\\left[\\frac{(t_{\\rm Ch4}-t_{\\rm Ch3})v_x}{2h}\\right] p,\n\\\\\n\\label{eq:Y}\nY={\\rm ceil}\\left[\\frac{(t_{\\rm Ch2}-t_{\\rm Ch1})v_y}{2h}\\right] p,\n\\end{eqnarray}\nwhere $h$ is the length of each segment of the meander line, $p$ is the repetition pitch of the meander line, and $t_{\\rm Ch1}$, $t_{\\rm Ch2}$, $t_{\\rm Ch3}$, and $t_{\\rm Ch4}$ are the time stamps of the signals received at Ch1, Ch2, Ch3, and Ch4, i.e., the signals that propagated toward both ends of the $X$ (Ch3, Ch4) and $Y$ (Ch1, Ch2) meander lines \\cite{CB-KID_4}.\nIn this way, both positions are determined from timestamp differences alone; hence, we can image the positions of the mesoscopic excitations in a two-dimensional (2D) plane using a very limited number (four) of electric leads for the readout circuits. \nWe note that the pixel size is proportional to $v_{x, y}$ and inversely proportional to $h$. 
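The timestamp-to-position reconstruction above can be sketched as follows; the propagation velocities, segment length, and pitch used in the usage example are illustrative placeholders, not the calibrated detector values.

```python
import math

def hit_position(t_ch3, t_ch4, t_ch1, t_ch2, v_x, v_y, h, p):
    """Reconstruct a neutron hit position from the four timestamps.

    Sketch of the two equations above: the arrival-time difference
    between the two ends of a meander line, scaled by the propagation
    velocity and divided by twice the segment length h, gives the
    segment index; the repetition pitch p converts it to a coordinate.
    Times are in seconds, velocities in m/s, h and p in metres.
    """
    x = math.ceil((t_ch4 - t_ch3) * v_x / (2.0 * h)) * p
    y = math.ceil((t_ch2 - t_ch1) * v_y / (2.0 * h)) * p
    return x, y

# Illustrative (not calibrated) values: v ~ 6e7 m/s, h = 15.1 mm, p = 1.5 um.
x, y = hit_position(0.0, 2 * 0.0151 * 99.5 / 6.0e7,
                    0.0, 2 * 0.0151 * 199.5 / 5.7e7,
                    6.0e7, 5.7e7, 0.0151, 1.5e-6)
```

Note how the ceiling operation quantizes the position to multiples of the pitch, which is why the pixel size scales with the velocity and inversely with the segment length.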
\nTherefore, the pixel size can be made finer by reducing $v_{x, y}$ and\/or extending $h$.\n\nThe acquired timestamp data are processed according to the abovementioned procedures on a data-processing computer to obtain a neutron transmission image.\n\n\n\\section{Detector structure and experimental apparatus}\n\\subsection{Detector structure}\nOur CBKID comprised seven layers deposited on a thermally oxidized Si substrate.\nThe full stack, from bottom to top, was as follows: (1) a 625-$\\mu$m-thick silicon substrate, (2) a 300-nm-thick SiO$_2$ layer, (3) a 300-nm-thick superconducting Nb ground plane, (4) a 350-nm-thick insulating SiO$_2$ layer, (5) a 40-nm-thick superconducting Nb $Y$ meander line, (6) a 50-nm-thick insulating SiO$_2$ layer, (7) a 40-nm-thick superconducting Nb $X$ meander line, (8) a 50-nm-thick passivation SiO$_2$ layer, and (9) a $^{10}$B neutron capture layer.\nThe nuclear reaction $^{10}$B(n, $^4$He)$^7$Li mainly emits a $^4$He ion of 1.47\\,MeV and a $^7$Li ion of 0.88\\,MeV; the local energy dissipated in the meander line by each projectile creates a hot spot on the Nb meander lines.\nIn this detector, the $^{10}$B layer was made by painting a mixed solution of GE7031 varnish and $^{10}$B powder with a brush.\nThis method was intended to achieve a thickness sufficient compared with the ion ranges, but it introduced inhomogeneity in the $^{10}$B density of the conversion layer because the GE7031 varnish segregates upon drying.\nAll turning points of the meander lines were rounded. 
The width was kept constant to ensure smooth propagation of the electromagnetic waves without reflection along the whole meander line.\nMoreover, the line width was gradually tapered from the end of each meander line to the electrode pad to prevent signal reflection caused by a sudden change in impedance.\nThe $X$ and $Y$ meander lines of 0.9\\,$\\mu$m width and 15.1\\,mm segment length were folded 10,000 times with 0.6\\,$\\mu$m spacing.\nThe repetition period $p$ was 1.5\\,$\\mu$m, and the total length of the meander line $l$ reached 151\\,m.\nThe Nb meander line with two end electrodes was fabricated in the Clean Room for Analog-Digital Superconductivity (CRAVITY) at the National Institute of Advanced Industrial Science and Technology (AIST).\nCompared with the previously reported detector \\cite{CB-KID_4}, we extended the segment length by a factor of 1.5 and reduced the pitch width to 3\/4.\nThe extension of the segment length not only increased the sensor detection area but also improved the spatial resolution, assisted by the refinement of the segment pitch.\nAlthough the ultimate pixel size of our detector may be defined by the repetition period of the meander line, the actual pixel size was an integral multiple of the repetition period because of the limited temporal resolution of the readout circuit.\n\n\n\\subsection{Experimental apparatus}\nThe experimental apparatus is schematically shown in Fig.~\\ref{CBKID}~(b).\nThe DC bias currents $I^x_b$ and $I^y_b$ were applied by two constant-voltage sources through 3\\,k$\\Omega$ resistors for both meander lines.\nThe signals from Ch1, Ch2, Ch3, and Ch4 were amplified via a differential ultralow-noise amplifier (SA-430 F5 by NF Corporation), while the negative signals from Ch1 and Ch3 were inverted. 
A readout board (Kalliope-DC) and a 2.5\\,GHz sampling digital oscilloscope (Teledyne LeCroy HDO4104-MS) simultaneously received the positive signals, because the thresholds for counting signals in the Kalliope-DC board were configured for positive polarity for convenience.\nThe Kalliope-DC readout circuit had a 1-ns-sampling multichannel (16\\,Ch\\,$\\times$\\,2) time-to-digital converter (TDC), which was originally developed by Kojima {\\it et al.} for muon-spin relaxation ($\\mu$SR) measurements at the J-PARC facility \\cite{Kalliope}.\nThe CBKID and the test samples were cooled to temperatures below $T_{\\rm c}$ using a Gifford-McMahon (GM) cryocooler.\nThe detector temperature was monitored using a Cernox thermometer and controlled using a heater placed near the detector.\nThe detector was irradiated with the neutron beam from the silicon-substrate side through the test samples, which were placed at a distance of 0.8\\,mm from the detector and cooled together with it.\nThe neutron-irradiation experiments were performed with pulsed neutrons having a collimator ratio of $L\/D=14\\,{\\rm m}\/0.10\\,{\\rm m}=140$ at BL10 of the Materials and Life Science Experimental Facility (MLF) of J-PARC \\cite{BL10}.\nThe neutron kinetic energy is proportional to the square of the velocity; therefore, measuring the time a neutron takes to travel a known distance provides its energy. This is the so-called time-of-flight (TOF) method. 
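The TOF-to-energy conversion can be sketched as follows; the flight time in the usage example is an assumed value, chosen to correspond to a roughly 10-meV neutron on the 14-m flight path.

```python
# Physical constants (CODATA values, rounded).
H_PLANCK = 6.62607015e-34   # Planck constant, J s
M_NEUTRON = 1.67492750e-27  # neutron mass, kg
EV = 1.602176634e-19        # 1 eV in joules

def tof_to_energy_and_wavelength(t_flight, path_length):
    """Convert a neutron time of flight (s) over a known path (m)
    to its kinetic energy (eV) and de Broglie wavelength (m)."""
    v = path_length / t_flight                 # neutron speed, m/s
    energy_ev = 0.5 * M_NEUTRON * v**2 / EV    # E = m v^2 / 2
    wavelength = H_PLANCK / (M_NEUTRON * v)    # lambda = h / (m v)
    return energy_ev, wavelength

# Assumed flight time of ~10 ms over the 14-m path gives ~10 meV.
energy, wavelength = tof_to_energy_and_wavelength(10.1e-3, 14.0)
```

Such a neutron has a wavelength of roughly 0.29 nm, i.e., within the 0.052 to 1.13 nm band used for the imaging measurements.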
\nThe energy resolution was determined by the TOF method over the 14-m flight path, with a pulse width of 33\\,$\\mu$s full width at half maximum (FWHM) at 10\\,meV.\n\n\\section{Results and discussion}\n\\subsection{Signals by the neutron reaction in the CBKID}\nFigure~\\ref{Signal} shows a typical signal quartet measured using the oscilloscope.\nCh1 and Ch2 corresponded to both ends of the $Y$ meander line, whereas Ch3 and Ch4 corresponded to both ends of the $X$ meander line.\nNotably, the negative signals from Ch1 and Ch3 were inverted to positive signals using a differential amplifier.\nThese four signals thus arose from a single neutron reaction event at the hot spot.\nFrom Eqs.~(\\ref{eq:X}) and (\\ref{eq:Y}), using the times at which these four signals were detected, the position of the hot spot created by the neutron reaction was specified as a 2D coordinate.\nThe widths of these signals were approximately 40\\,ns, indicating a sharp response.\nThese narrow signal widths and the signal-quartet selection procedure, which are characteristic of the CBKID, enabled high-speed measurement and energy-dispersive neutron imaging in combination with the TOF technique.\nWe estimated the theoretical detection-rate tolerance to be as high as a few tens of MHz because the CBKID can discriminate multi-hit events, in contrast with other techniques.\nIn practice, a detection rate of 0.2\\,MHz can be read using our current system.\n\n\\subsection{Direct beam measurement and image processing}\\label{background}\nWe first show the neutron transmission image obtained using the CBKID without any test samples.\nThe color scale indicates the number of events (NOE).\nThe image was obtained by summing over 17.9\\,h under the conditions of $T=4.0\\,$K, $I_{\\rm b}^x = I_{\\rm b}^y = $0.15\\,mA, and 395\\,kW beam power.\nFigure~\\ref{DirectBeam}~(a) shows the image with an incident neutron wavelength $\\lambda$ ranging from 0.052 to 1.13\\,nm.\nThe NOEs from 
$10\times10$ pixels were combined in Figs.~\\ref{DirectBeam}~(a) and (b) to obtain a high-contrast image.\n\nThe neutron conversion of the $^{10}$B layer was not sufficiently homogeneous in the present CBKID sensor, as evidenced by the irregular curves seen in Fig.~\\ref{DirectBeam}~(a).\nAdditionally, a white diagonal line from the upper left to the lower right can be seen.\nWe consider this line to be caused by signal leaks between the $X$ and $Y$ meander lines.\nThe actual signal is weakened if it is opposite in polarity to the leak signal and the two merge at a leak point.\nAssuming that the signals arising from the neutron reaction at $(n_x, n_y)$ weaken each other at a leak point $(n_x^l, n_y^l)$, the relation between $(n_x, n_y)$ and $(n_x^l, n_y^l)$ satisfies the following equation:\n\\begin{eqnarray}\n\\label{eq:line1}\n\\frac{n_x^l-n_x}{v_x}=\\frac{n_y-n_y^l}{v_y},\n\\end{eqnarray}\nwhere $v_x$ and $v_y$ are the propagation velocities for the $X$ and $Y$ meander lines, respectively.\nWe can obtain the linear function from Eq.~(\\ref{eq:line1}) as follows:\n\\begin{eqnarray}\n\\label{eq:line2}\nn_y=-\\frac{v_y}{v_x}n_x+n_y^l+\\frac{v_y}{v_x}n_x^l.\n\\end{eqnarray}\nFrom Eq.~(\\ref{eq:line2}), the diagonal line can be predicted as a linear function with a slope of $-v_y\/v_x = -5.74672\\times 10^7\/6.29011\\times 10^7 = -0.913612$.\nThe diagonal-line slope obtained from Fig.~\\ref{DirectBeam}~(a) was $-0.9135$, in good agreement with this prediction.\nThe neutron imaging of the test samples was performed before the direct beam measurement and showed no diagonal line, implying that the diagonal line appeared because of aging degradation of the detector.\nWe tried to remove the diagonal line via image processing before proceeding with the background correction (Fig.~\\ref{DirectBeam}~(b)).\n\n\\subsection{Neutron imaging of the test samples}\nFigure~\\ref{SampleImage}~(a) shows a photograph of the test objects, namely (\\#1) a spider, (\\#2) a titanium screw, (\\#3) 
a stainless-steel screw, (\\#4) another stainless-steel screw, (\\#5) a Japanese beautyberry (plant), and (\\#6) a circuit board.\nIn addition, we superimposed well-shaped $^{10}$B-dots as a neutron absorber to test the spatial resolution, as shown in Fig.~\\ref{SampleImage}~(b).\nThe test absorber comprised a 50-$\\mu$m-thick stainless-steel mesh (15\\,$\\times$\\,15\\,mm$^2$ in size), wherein 100\\,$\\times$\\,100\\,$\\mu$m$^2$ square holes were arrayed in a square lattice (lattice constant: 250\\,$\\mu$m).\nEach hole was tightly filled with very fine $^{10}$B particles. The stainless-steel mesh was fabricated using the wet-etching technique; hence, the corners and edges of the square holes were somewhat rounded [refer to the optical photograph shown in Fig.~\\ref{SampleImage}~(b)].\nThe measurement was performed for 104.9\\,h under the conditions of a bias current $I_{\\rm b}=0.15\\,$mA, a temperature $T$=4.0\\,K, and a 304\\,kW beam power.\n\nFigure~\\ref{SampleImage}~(c) shows the neutron transmission image with an incident neutron wavelength $\\lambda$ ranging from 0.052 to 1.13\\,nm after correcting for the background by dividing the neutron image of the test samples by the direct beam image of Fig.~\\ref{DirectBeam}~(b).\nNotably, the NOEs from $10\\times10$ pixels were combined.\nThe plant fruit, the three screws, the spider, and the $^{10}$B-dot pattern could all be confirmed, demonstrating the capability of the CBKID for neutron imaging of organic and metal samples.\nMoreover, an internal structure can be seen inside the two stainless-steel screws. \nSuch a structure was not seen in the titanium screw.\nAdditionally, the difference between the pulp and seed parts of the internal structure of the berry can be seen (\\#5).\nIrregular curves still remained visible. 
\nAs mentioned earlier, we succeeded in imaging the test objects of interest over the 15\\,$\\times$\\,15\\,mm$^2$ area using the CBKID.\n\n\\subsection{Spatial resolution}\nWe discuss here the spatial resolution of the CBKID using the $^{10}$B-dot pattern embedded in the test sample.\nFigures~\\ref{LineProf}~(a), (b), (c), and (d) show typical line profiles along the (a) $X$- and (c) $Y$-directions and the corresponding derivatives along the (b) $X$- and (d) $Y$-directions, with minimum pixel sizes of 3 or 4.5\\,$\\mu$m and 1.5 or 3\\,$\\mu$m for the $X$- and $Y$-directions, respectively.\nThe Gaussian fitting results for the derivatives are depicted by the solid lines in Figs.~\\ref{LineProf}~(b) and (d).\nWe performed Gaussian fitting of the derivatives for all the clear dot patterns (480\\,points) appearing in Fig.~\\ref{SampleImage}~(c) and obtained average FWHMs of 19.2\\,$\\mu$m and 16.2\\,$\\mu$m for the $X$- and $Y$-directions, respectively.\nThe spatial resolution in the $Y$-direction was better than that in the $X$-direction because $v_y$ was smaller than $v_x$.\nThe imperfect edge sharpness of the holes in the stainless-steel mesh and the incomplete filling of the $^{10}$B particles into the holes may partially affect the evaluated spatial resolution.\nAs expected, the present spatial resolutions were improved compared with those in a previous report \\cite{CB-KID_4}, in which the spatial resolutions were evaluated using a similar test pattern.\n\n\n\\subsection{Material analysis using the Bragg edge}\nThe combination of the high temporal resolution of the CBKID and the TOF method at a pulsed neutron source allowed us to demonstrate wavelength (energy)-selective neutron imaging.\nFigure~\\ref{BraggEdge}~(a) shows the neutron transmission of the stainless-steel screw (sample \\#3). 
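For a cubic crystal, the position of each Bragg edge follows directly from the lattice spacing: coherent scattering from the $(hkl)$ planes switches off beyond $\lambda = 2d_{hkl} = 2a/\sqrt{h^2+k^2+l^2}$, producing the sawtooth step in the transmission. A sketch of this estimate is given below; the lattice constant is an assumed textbook value for austenitic (fcc) stainless steel, not a value fitted to our data.

```python
import math

def bragg_edge_wavelength(a, h, k, l):
    """Longest wavelength diffracted by the (hkl) planes of a cubic
    lattice: lambda_edge = 2 * d_hkl = 2 * a / sqrt(h^2 + k^2 + l^2)."""
    return 2.0 * a / math.sqrt(h * h + k * k + l * l)

# Assumed lattice constant for austenitic (fcc) stainless steel, in nm.
A_STEEL = 0.3592
edge_111 = bragg_edge_wavelength(A_STEEL, 1, 1, 1)  # ~0.415 nm
edge_200 = bragg_edge_wavelength(A_STEEL, 2, 0, 0)  # ~0.359 nm
```

Under this assumption the 111 edge falls near 0.415 nm, i.e., between the two wavelength bands used for the image ratio in the analysis below.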
\nNotably, the stainless-steel mesh was not attached during the direct beam measurement; however, it was installed during the neutron imaging measurements of the test samples.\nThe 111 Bragg edge of stainless steel was clearly observed, as shown in Fig.~\\ref{BraggEdge}~(a).\nA weaker but still distinct 200 Bragg edge was also observed.\nFigure~\\ref{BraggEdge}~(b) shows the ratio of the neutron transmission image with wavelengths shorter than the 111 Bragg edge (0.390(1)\\,nm$<\\lambda<$0.401(1)\\,nm) to that with wavelengths longer than the 111 Bragg edge (0.423(1)\\,nm$<\\lambda<$0.434(2)\\,nm). \nAs shown in Fig.~\\ref{BraggEdge}~(b), only the stainless-steel screws were clearly observed; the other samples disappeared upon division.\n\n\\section{Summary}\nIn this study, we demonstrated neutron imaging with a higher spatial resolution compared with the previous report \\cite{CB-KID_4}, as well as Bragg edge analysis, using delay-line CBKID systems.\nAccordingly, we succeeded in fabricating 0.9-$\\mu$m-wide strip lines without any disconnection even over a 151\\,m length.\nThis improvement of the CBKIDs brought the spatial resolution down to 16.2\\,$\\mu$m.\nTest samples of various shapes and materials were clearly observed over the whole sensor area of 15\\,$\\times$\\,15\\,mm$^2$.\nIn addition, our detector achieves a high temporal resolution in combination with the Kalliope-DC readout circuit, whose 1-ns-resolution TDC enables high-speed data acquisition.\nBy combining with the TOF method, the delay-line CBKID became capable of wavelength (energy)-selective neutron imaging and Bragg edge analysis, as demonstrated for stainless steel.\nA further improvement of the fabrication method for a homogeneous $^{10}$B conversion layer is required. Nonetheless, the CBKID has potential as a unique neutron imager.\n\n\\ack\nThis work is partially supported by a Grant-in-Aid for Scientific Research (Grant Nos. 
JP16H02450, JP19K03751) and from JSPS.\nThe neutron-irradiation experiments at the Materials and Life Science Experimental Facility (MLF) of the J-PARC are conducted under the support of MLF programs (Proposals Nos. 2015A0129, 2016B0112, 2017A0011, 2017B0014, 2018A0109, 2015P0301, 2016P0301, 2018P0201).\nDevelopment of the Kalliope TDC and readout electronics\/software is conducted under the collaboration of KEK Open Source Consortium of Instrumentation (Open-IT).\n\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nVision-based deep reinforcement learning has recently been applied to robotic manipulation tasks with promising success (\\cite{Quillen, Kalashnikov, haarnoja2, ebert2018, Agrawal2016, Schwab}). Despite successes in terms of task performance, reinforcement learning is not an established solution method in robotics, mainly because of lengthy training times (e.g., four months with seven robotic arms in \\cite{Kalashnikov}). We argue in this work that reinforcement learning can be made much faster, and therefore more practical in the context of robotics, if additional elements of human physiology and cognition are incorporated: namely the abilities associated with active, goal-directed perception.\n\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/main.pdf}\n \\caption{Our active perception setup, showing the interaction between two manipulators (A, E). The camera manipulator (A) is used to shift a wrist-attached camera frame (B) about a fixation point (C) while maintaining the line of sight (D) aligned with the point. The gripper manipulator (E) is equipped with a 6-DoF action space. The original image observation (G) is sampled with a log-polar like transform to obtain (H). 
Note that the log-polar sampling reduces the image side length by a factor of four (256x256 to 64x64) without sacrificing the quality of the central region.}\n\\end{figure}\n\nWe focus in particular on two related strategies used by the human visual system \\cite{Gegenfurtner2016}.\nFirst, the human retina provides a space-variant sampling of the visual field such that the density of photoreceptors is highest in the central region (fovea) and declines towards the periphery. This arrangement allows humans to have high-resolution, sharp vision in a small central region while maintaining a wider field-of-view. Second, humans (and primates in general) possess a sophisticated repertoire of eye and head movements \\cite{Liversedge2011} that align the fovea with different visual targets in the environment (a process termed `foveation'). This ability is essential for skillful manipulation of objects in the world: under natural conditions, humans will foveate an object before manipulating it \\cite{Johansson2001}, and performance declines for actions directed at objects in the periphery \\cite{Prado2005}. \n\nThese properties of the primate visual system have not gone unnoticed in the developmental robotics literature. Humanoid prototypes are often endowed with viewpoint control mechanisms (\\cite{Orabona, Colombo1996, Metta2000, Falotico2009}). The retina-like, space-variant visual sampling is often approximated using the log-polar transform, which has been applied to a diverse range of visual tasks (see \\cite{Traver2010} for a review). Space-variant sampling, in conjunction with an active perception system, allows a robot to perceive high-resolution information about an object (e.g., shape and orientation) and still maintain enough contextual information (e.g., location of object and its surroundings) to produce appropriate goal-directed actions. We mimic these two properties in our model. 
First, in addition to the grasping policy, we learn an additional `fixation' policy that controls a second manipulator (Figure 1A, B) to look at different objects in space. Second, images observed by our model are sampled using a log-polar like transform (Figure 1G, H), disproportionately representing the central region.\n\nActive perception provides two benefits in our model: an attention mechanism (often termed `hard' attention in deep learning literature) and an implicit way to define goals for downstream policies (manipulate the big central object in view). A third way we exploit viewpoint changes is for multiple-view self-supervised representation learning. The ability to observe different views of an object or a scene has been used in prior work (\\cite{Sermanet2017a, Eslami2018, Dwibedi2018, Yan2017a}) to learn low-dimensional state representations without human annotation. Efficient encoding of object and scene properties from high-dimensional images is essential for vision-based manipulation; we utilize Generative Query Networks \\cite{Eslami2018} for this purpose. While prior work assumed multiple views were available to the system through unspecified or external mechanisms, here we use a second manipulator to change viewpoints and to parameterize camera pose with its proprioceptive input.\n\nWe apply our active perception and representation (APR) model to the benchmark simulated grasping task published in \\cite{Quillen}. We show that our agent can a) identify and focus on task-relevant objects, b) represent objects and scenes from raw visual data, and c) learn a 6-DoF grasping policy from sparse rewards. In both the 4-DoF and 6-DoF settings, APR achieves competitive performance (85\\% success rate) on test objects in under 70,000 grasp attempts, providing a significant increase in sample-efficiency over algorithms that do not use active perception or representation learning \\cite{Quillen}. 
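A retina-like, log-polar sampling in the spirit of our transform (Figure 1G, H) can be sketched as follows; this is an illustrative nearest-neighbour version, and the radial spacing and interpolation of the transform used in our implementation may differ.

```python
import numpy as np

def logpolar_sample(image, out_size=64):
    """Retina-like resampling of a square (H, W, C) image: each output
    row corresponds to a log-spaced radius and each output column to an
    angle, so sampling density is highest at the image center."""
    h = image.shape[0]
    center = (h - 1) / 2.0
    # Log-spaced radii from 0 (center) to the image half-size (periphery).
    radii = np.exp(np.linspace(0.0, np.log(center + 1.0), out_size)) - 1.0
    angles = np.linspace(0.0, 2.0 * np.pi, out_size, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    # Nearest-neighbour lookup of the source pixel for each (radius, angle).
    ys = np.clip(np.rint(center + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.rint(center + rr * np.cos(aa)).astype(int), 0, h - 1)
    return image[ys, xs]

# A 256x256 input maps to a 64x64 output, a factor of four per side.
foveated = logpolar_sample(np.zeros((256, 256, 3)))
```

The first output rows all fall near the image center, which is what gives the central object a disproportionate share of the resampled pixels.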
Our key contributions are:\n\\begin{itemize}\n\\item a biologically inspired model for visual perception applied to robotic manipulation\n\\item a simple approach for joint learning of eye and hand control policies from sparse rewards\n\\item a method for sample-efficient learning of 6-DoF, viewpoint-invariant grasping policies\n\\end{itemize}\n\n\\begin{figure*} [t!]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{images\/model.pdf}\n \\caption{The APR Model. Visual (A) and proprioceptive (B) input from one view are encoded by the multimodal encoder to obtain the representation $r1$. The representation $r2$ is similarly obtained by encoding the visual (C) and proprioceptive input (D) from a second view. $r1$ and $r2$ are added to obtain the combined scene representation $r$. The action $a$, state-value $v$, and action-value function $q$ are computed for both the grasp policy (E) and the fixation policy (G). The GQN generator $g$ predicts the image from a query viewpoint, which is compared to the ground truth image from that view (F). Yellow boxes represent fully connected layers. Pink boxes represent convolutional blocks.}\n \\label{figure:model}\n\\end{figure*}\n\\section{Related Work}\n\n\\subsection{Deep RL for Vision-Based Robotic Manipulation} \n\nOur task is adapted from the simulated setup used in \\cite{Quillen} and \\cite{Kalashnikov}. \\cite{Kalashnikov} showed that intelligent manipulation behaviors can emerge through large-scale Q-learning in simulation and on real world robots. The robots were only given RGB inputs from an uncalibrated camera along with proprioceptive inputs. A comparative study of several Q-learning algorithms in simulation was performed in \\cite{Quillen} using the same task setup. Achieving success rate over 80\\% required over 100K grasp attempts. Performance of 85\\% or over is reported with 1M grasp attempts. 
Furthermore, \\cite{Kalashnikov} and \\cite{Quillen} restricted the action space to 4-DoF (top-down gripper orientations). We remove such restrictions, allowing the gripper to control all 6-DoF as this is important for general object manipulation. \n\nReinforcement learning with high-dimensional inputs and sparse rewards is data intensive (\\cite{Mnihb, Kaiser2019}), posing a problem for real world robots where collecting large amounts of data is costly. Goal-conditioned policies have been used to mitigate the sparse reward problem in previous work (\\cite{Andrychowicz, Nair2018}). In addition to optimizing the sparse rewards available from the environments, policies are also optimized to reach different goals (or states), providing a dense learning signal. We adopt a similar approach by using the 3D points produced by a fixation policy as reaching targets for the grasping policy. This ensures that the grasping policy always has a dense reward signal. We use the Soft Actor-Critic algorithm \\cite{Haarnoja2018} for policy learning, which was shown to improve both sample-efficiency and performance on real world vision-based robotic tasks \\cite{haarnoja2}. \n\n\\subsection{Multiple View Object and Scene Representation Learning}\n\nClassical computer vision algorithms infer geometric structure from multiple RGB or RGBD images. For example, structure from motion \\cite{Ozyesil2017} algorithms use multiple views of a scene across time to produce an \\textit{explicit} representation of it in the form of voxels or point sets. Multiple, RGBD images across space can also be integrated to produce such explicit representations \\cite{Zollhofer2018}. The latter approach is often used to obtain a 3D scene representation in grasping tasks (\\cite{Zeng2017a, Zeng2017}). In contrast to these methods, neural-based algorithms learn \\textit{implicit} representations of a scene. 
This is typically structured as a self-supervised learning task, where the neural network is given observations from some viewpoints and is tasked with predicting observations from unseen viewpoints. The predictions can take the form of RGB images, foreground masks, depth maps, or voxels (\\cite{Rezende2016, Gadelha, Tulsiani2017, Yan2016, Wu2016b, Eslami2018}). The essential idea is to infer low-dimensional representations by exploiting the relations between the 3D world and its projections onto 2D images. A related approach is described in \\cite{Florence} where the network learns object descriptors using a pixelwise contrastive loss. However, data collection required a complex pre-processing procedure (including a 3D reconstruction) in order to train the network in the first place. Instead of predicting observations from different views, Time Contrastive Networks (TCNs) \\cite{Sermanet2017a} use a metric learning loss to embed different viewpoints closer to each other than to their temporal neighbors, learning a low-dimensional image embedding in the process.\n\nMultiple view representation learning has proven useful for robotic manipulation. TCNs \\cite{Sermanet2017a} enabled reinforcement learning of manipulation tasks and imitation learning from humans. Perspective transformer networks \\cite{Yan2016} were applied to a 6-DoF grasping task in \\cite{Yan2017a}, showing improvements over a baseline network. \\cite{Florence} used object descriptors to manipulate similar objects in specific ways. GQNs \\cite{Eslami2018} were shown to improve data-efficiency for RL on a simple reaching task. In this work we chose to use GQNs for several reasons: a) they require minimal assumptions, namely, the availability of RGB images only, and b) they can handle unstructured scenes, representing both multiple objects and background contextual information. We adapted GQNs to our framework in three ways. 
First, viewpoints are not arbitrarily distributed across the scene; rather, they maintain the line of sight directed at the 3D point chosen by the fixation policy. Second, we apply the log-polar like transform to all the images, such that the central region of the image is disproportionately represented. These two properties allow the representation to be largely focused on the central object, with contextual information receiving less attention according to its distance from the image center. Third, instead of learning the representation prior to the RL task as done in \\cite{Eslami}, we structure the representation learning as an auxiliary task that is jointly trained along with the RL policies. This approach has been previously used in \\cite{Jaderberg2016a}, for example, resulting in 10x better data-efficiency on Atari games. Thus, APR jointly optimizes two RL losses and a representation loss from a single stream of experience. \n\n\\subsection{Visual Attention Architectures}\n\nAttention mechanisms are found in two forms in deep learning literature \\cite{Xu2015a}. ``Soft'' attention is usually applied as a weighting on the input, such that more relevant parts receive heavier weighting. ``Hard'' attention can be viewed as a specific form of soft attention, where only a subset of the attention weights are non-zero. When applied to images, this usually takes the form of an image crop. Hard attention architectures are not the norm, but they have been used in several prior works, where a recurrent network is often used to iteratively attend to (or ``glimpse'') different parts of an image. In \\cite{Eslami}, this architecture was used for scene decomposition and understanding using variational inference. In \\cite{Gregor2015}, it was used to generate parts of an image one at a time. In \\cite{Mnih}, it was applied to image classification tasks and dynamic visual control for object tracking. 
More recently in \\cite{Elsayed}, hard attention models have been significantly improved to perform image classification on ImageNet. Our work can be seen as an extension of these architectures from 2D to 3D. Instead of a 2D crop, we have a 3D shift in position and orientation of the camera that changes the viewpoint. We found a single glimpse was sufficient to reorient the camera so we did not use a recurrent network for our fixation policy. \n\n\\section{Method}\n\n\\subsection{Overview}\n\nWe based our task on the published grasping environment \\cite{Quillen}. A robotic arm with an antipodal gripper must grasp procedurally generated objects from a tray (Figure 1). We modify the environment in two ways: a) the end-effector is allowed to move in full 6-DoF (as opposed to 4-DoF), and b) a second manipulator (the head) is added with a camera frame fixed onto its wrist. This second manipulator is used to change the viewpoint of the attached camera. The agent therefore is equipped with two action spaces: a viewpoint control action space and a grasp action space. Since the camera acts as the end-effector on the head, its position and orientation in space are specified by the joint configuration of that manipulator: $v = (j_1, j_2, j_3, j_4, j_5, j_6)$. The viewpoint action space is three-dimensional, defining the point of fixation $(x, y, z)$ in 3D space. Given a point of fixation, we sample a viewpoint from a sphere centered on it. The yaw, pitch and distance of the camera relative to the fixation point are allowed to vary randomly within a fixed range. We then use inverse kinematics to move the head to the desired camera pose. Finally, the second action space is 6-dimensional $(dx, dy, dz, da, db, dc)$, indicating the desired change in gripper position and orientation (Euler angles) at the next timestep. \n\nEpisodes are structured as follows. 
The agent is presented with an initial view (fixation point at the center of the bin) and then executes a glimpse by moving its head to fixate a different location in space. This forms a single-step episode from the point of view of the glimpse policy (which reduces the glimpse task to the contextual bandits formulation). The fixation location is taken as the reaching target; this defines the auxiliary reward for the grasping policy. The grasping policy is then executed for a fixed number of timesteps (maximum 15) or until a grasp is initiated (when the tool tip drops below a certain height). This defines an episode from the point of view of the grasping policy. The agent receives a final sparse reward if an object is lifted and the tool position at grasp initiation was within 10cm of the fixation target. The latter condition encourages the agent to look more precisely at objects, as it is only rewarded for grasping objects it was looking at. The objective of the task is to maximize the sparse grasp success reward. The grasping policy is optimized using the sparse grasp reward and the auxiliary reach reward, and the fixation policy is optimized using the grasp reward only. \n\nNote that all views sampled during the grasping episode are aligned with the fixation point. In this manner, the grasping episode is implicitly conditioned by the line of sight. Essentially, this encourages the robot to achieve a form of eye-hand coordination where reaching a point in space is learnt as a reusable skill. The manipulation task is thus decomposed into two problems: localize and fixate a relevant object, then reach for and manipulate said object. \n\n\\subsection{Model}\n\nAn overview of APR is given in Figure \\ref{figure:model}. 
Multimodal input from one view, consisting of the view parameterization (six joint angles of the head $v = (j_1, j_2, j_3, j_4, j_5, j_6)$), image ($64 \\times 64 \\times 3$) and gripper pose $g = (x, y, z, \\sin(a), \\cos(a), \\sin(b), \\cos(b), \\sin(c), \\cos(c))$, is encoded into a scene representation, $r1$, using a seven-layer convolutional network with skip connections. $(a, b, c)$ are the Euler angles defining the orientation of the gripper. The scene representation $r1$ is of size $16 \\times 16 \\times 256$. The proprioceptive input vectors $g$ and $v$ are given spatial extent and tiled across the spatial dimension ($16 \\times 16$) before being concatenated to an intermediate layer of the encoder. The input from a second view (Figure 2C, D) is similarly used to obtain $r2$, which is then summed with $r1$ to obtain $r$, the combined scene representation.\n\nThe fixation policy and grasping policies operate on top of $r$. Their related outputs (action $a$, state-value $v$ and action-value functions $q$) are each given by a convolutional block followed by a fully-connected layer. The convolutional blocks each consist of three layers of $3 \\times 3$ kernels with number of channels 128, 64, and 32 respectively. The generator is a conditional, autoregressive latent variable model that uses a convolutional LSTM layer. Conditioned on the representation $r$, it performs 12 generation steps to produce a probability distribution over the query image. The encoder and generator architecture are unmodified from the original work; for complete network details we refer the reader to \\cite{Eslami2018}.\n\n\\begin{figure}[t!] \n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/episode.pdf}\n \\caption{Comparing visual inputs of the active and passive models during a five-step episode. Top: images sampled from different views centered on the target object. Bottom: images from one of the static cameras of the passive model. 
An interesting feature of the active input is that the gripper appears larger as it approaches the target object, providing an additional learning cue.}\n\\end{figure}\n\nThe log-polar-like sampling we use is defined as follows. Let $(u, v) \\in [-1, 1] \\times [-1, 1]$ be a coordinate in a regularly spaced image sampling grid. We warp $(u, v)$ to obtain the log-polar sampling coordinate $(u', v')$ using the following equation: \n\n$$\n(u', v') = \\log(\\sqrt{u^2 + v^2} + 1) \\cdot (u, v)\n$$\n\n\\subsection{Learning}\n\nWe learn both policies using the Soft Actor-Critic algorithm \\cite{Haarnoja2018}, which optimizes the maximum-entropy RL objective. For detailed derivations of the loss functions for policy and value learning, we refer the reader to \\cite{Haarnoja2018}. In conjunction with the policy learning, the multimodal encoder and generator are trained using the generative loss (evidence lower bound) of GQN. This loss consists of KL divergence terms and a reconstruction error term obtained from the variational approximation \\cite{Eslami2018}. Note that the encoder does not accumulate gradients from the reinforcement learning losses and is only trained with the generative loss. To obtain multiple views for training, we sample three viewpoints centered on the given fixation point at every timestep during a grasping episode. Two of those are randomly chosen as context views to obtain $r$, and the third acts as the ground truth for prediction. We did not perform any hyperparameter tuning for the RL or GQN losses and used the same settings found in \\cite{Haarnoja2018} and \\cite{Eslami2018}. \n\n\\section{Experiments}\n\nWe perform three experiments that examine the performance of active vs passive models (Section A), of active models that choose their own targets (Section B), and the benefits of log-polar images and representation learning for active models (Section C).\n\nIn our experiments, training occurs with a maximum of 5 objects in the bin. 
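The log-polar-like warp defined in the Method section admits a compact sketch. The following is a minimal illustration (the function names are ours, not from the paper), assuming a square output grid on $[-1, 1]^2$; actual image resampling at the warped coordinates (e.g., bilinear interpolation) is omitted.

```python
import math

def warp_coord(u, v):
    # Log-polar-like warp from the paper: scale (u, v) by log(r + 1),
    # where r is the distance of (u, v) from the image center.
    r = math.sqrt(u * u + v * v)
    s = math.log(r + 1.0)
    return (s * u, s * v)

def sampling_grid(size):
    # Regularly spaced output grid on [-1, 1]^2, warped so that output
    # pixels near the center sample a disproportionately small source
    # region, i.e. the center of the source image is magnified.
    grid = []
    for i in range(size):
        for j in range(size):
            u = -1.0 + 2.0 * j / (size - 1)
            v = -1.0 + 2.0 * i / (size - 1)
            grid.append(warp_coord(u, v))
    return grid
```

An output pixel at radius $r$ samples the source at radius $r\log(r+1) \approx r^2$ near the center, so many output pixels are devoted to a small central region of the source image, magnifying the fixated object.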
A typical run takes approximately 26 hours (on a single machine), with learning saturating before 70K grasps. Every episode, the objects are sampled from a set of 900 possible training objects. For evaluation, we use episodes with exactly 5 objects present in the bin. Evaluation objects are sampled from a different set of 100 objects, following the protocol in \\cite{Quillen}. \n\n\\subsection{Active vs Passive Perception}\n\nWe evaluate active looking vs a passive (fixed) gaze at targeted grasping. This setting is designed to test goal-oriented manipulation, rather than grasping any object arbitrarily. For this experiment, we do not learn a fixation policy, but instead use goals, or target objects, that are selected by the environment. The policies are only rewarded for picking up the randomly chosen target object in the bin. The active model and the passive model receive the same inputs (visual and proprioceptive) along with a foreground object mask that indicates the target object. The only difference between the two models is the nature of the visual input: the active model observes log-polar images that are centered on the target object, while the passive model observes images of the bin from three static cameras (Figure 3). The static cameras are placed such that each can clearly view the contents of the bin and the gripper. This mimics a typical setup where cameras are positioned to view the robot workspace, with no special attention to any particular object or location in space. Using an instance mask to define the target object was previously done in \\cite{Fangb} for example. Note that the generator is trained to also reconstruct the mask in this case, forcing the representation $r$ to preserve the target information. \n\nTable 1 (Active-Target, Passive-Target) shows the evaluation performance of the active vs passive model with environment selected targets. We observe that the active model achieves 8\\% better performance. 
Figure 4 (yellow vs blue curves) shows that the active model is more sample-efficient as well. \n\nThe performance of the passive model (at 76\\%) is in line with the experiments in \\cite{Quillen} on targeted grasping. None of the algorithms tested there surpassed an 80\\% success rate, even with 1M grasp attempts. The experiment above suggests that, had the robot been able to observe the environment in a more ``human-like'' manner, targeted grasping performance could approach performance on arbitrary object grasping.\n\n\\begin{table}[t]\n\\caption{Evaluation Performance}\n\\label{table_example}\n\\begin{center}\n\\begin{tabular}{|c||c|}\n\\hline\nModel & Grasp Success Rate\\\\\n\\hline\nActive-Target & 84\\% \\\\\n\\hline\nPassive-Target & 76\\% \\\\\n\\hline\nActive-Target w\/o Log-Polar & 79\\% \\\\\n\\hline\nActive-Learned-6D (after 70K grasps) & 85\\% \\\\\n\\hline\nActive-Learned-4D (after 25K grasps) & 85\\% \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[t!] \n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/line-plot.pdf}\n \\caption{Learning curves for our experiments. Active-Target, Passive-Target: active and passive models with environment selected targets. Active-Learned: full APR model with fixation policy. Active-w\/o representation, Active-Target w\/o log-polar: APR versions without representation learning or log-polar sampling, respectively. Shaded regions indicate the standard deviation over two independent runs with different random seeds.}\n\\end{figure} \n\n\\subsection{Learning Where to Look}\n\nThe experiment above shows that active perception outperforms passive models in goal-oriented grasping. But can a robot learn where to look? Here we use the full version of the model with the learned fixation policy. Grasp rewards are given for picking up any object, as long as the object was within 10cm of the fixation point. This ensures that the model is only rewarded for goal-oriented behavior. 
In this setting, the model learns faster in the initial stages than in the targeted grasping case (Figure 4) and is slightly better in final performance (Table 1). This does not necessarily imply that this model is better at grasping; it could be due to the model choosing easier grasping targets. The latter may nevertheless be a virtue depending on the context (e.g., a bin-emptying application). This result indicates that active perception policies can be learnt in conjunction with manipulation policies. \n\nNote that the full version of APR does not use additional information from the simulator beyond the visual and proprioceptive inputs. In contrast to Section A, the fixation point (and therefore the auxiliary reaching reward) is entirely self-generated. This makes APR directly comparable to the vanilla deep Q-learning algorithms studied in \\cite{Quillen}. With 100K grasp attempts, the algorithms in \\cite{Quillen} achieve approximately 80\\% success rate. We tested the model in the 4-DoF case, where it achieves an 85\\% success rate with 25K grasps (Table 1). Therefore, APR outperforms these previous baselines with four times fewer samples. [33] reported improved success rates of 89-91\\% with vanilla deep Q-learning after 1M grasps (though it was not reported what levels of performance were attained between 100K and 1M grasps). On the more challenging 6-DoF version, we achieve an 85\\% success rate with 70K grasps, but we have not yet extended the simulations to 1M grasps to allow a direct comparison with these results. \n\n\\subsection{Ablations} \n\nTo examine the effects of the log-polar image sampling and the representation learning, we ran two ablation experiments in the environment selected target setting (as in Section A). Figure 4 (red curve) shows that APR without representation learning achieves negligible improvement within the given amount of environment interaction. 
(Without the representation learning loss, we allow the RL loss gradients to backpropagate to the encoder; otherwise it would not receive any gradient at all). The pink curve shows APR without log-polar images. The absence of the space-variant sampling impacts both the speed of learning and final performance (Table 1).\n\n\\section{Discussion and Future Work}\n\nWe presented an active perception model that learns where to look and how to act using a single reward function. We showed that looking directly at the target of manipulation enhances performance compared to statically viewing a scene (Section 4A), and that our model is competitive with prior work while being significantly more data-efficient (Section 4B). We applied the model to a 6-DoF grasping task in simulation, which requires appropriate reaching and object maneuvering behaviors. This is a more challenging scenario as the state space is much larger than the 4-DoF state space that has typically been used in prior work (\\cite{Pinto2015, Quillen, Kalashnikov}). 6-DoF control is necessary for more general object manipulation beyond top-down grasping. Figure 5 shows interesting cases where the policy adaptively orients the gripper according to scene and object geometry. \n\nThe biggest improvement over vanilla model-free RL algorithms came from representation learning, which benefited both passive and active models. Figure 6 shows sample generations from query views along with ground truth images from a single run of the active model. Increasingly sharp renderings (a reflection of increasingly accurate scene representation) correlated with improving performance as the training run progressed. While the generated images retained a degree of blurriness, the central object received a larger degree of representational capacity simply by virtue of its disproportionate size in the image. 
This is analogous to the phenomenon of ``cortical magnification'' observed in visual cortex, where stimuli in the central region of the visual field are processed by a larger number of neurons compared to stimuli in the periphery \\cite{DANIEL1961}. We suspect that such a representation learning approach -- one that appropriately captures the context, the end-effector, and the target of manipulation -- is useful for a broad range of robotic tasks. \n\n\\begin{figure}[t] \n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/nice.pdf}\n \\caption{Examples of pre-grasp orienting behaviors due to the policy's 6-DoF action space.}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/repr.pdf}\n \\caption{Scene renderings from query views at different snapshots during active model training. At later stages, the gripper, central object, and bin are well-represented. Surrounding objects occupy fewer pixels in the image, so they are not represented in as much detail.}\n\\end{figure} \n\nLooking ahead to testing the APR model in a physical environment, we see additional challenges. Realistic images may be more difficult for the generative model of GQN, which could hamper the representation learning. Exploration in a 6-DoF action space is more time-consuming and potentially more collision-prone than a top-down, 4-DoF action space. Some mechanism for force sensing or collision avoidance might be needed to prevent the gripper from colliding with objects or the bin. Active camera control introduces another complicating factor. It requires a physical mechanism to change viewpoints and a way of controlling it. We used a second 6-DoF manipulator in our simulator, but other simpler motion mechanisms are possible. Learning where to look with RL as we did in this work may not be necessary. 
It might be possible to orient the camera based on 3D location estimates of relevant targets.\n \nLooking at relevant targets in space and reaching for them are general skills that serve multiple tasks. We believe an APR-like model can therefore be applied to a wide range of manipulation behaviors, mimicking how humans operate in the world. Whereas we structured how the fixation and grasping policies interact (``look before you grasp''), an interesting extension is where both policies can operate dynamically during an episode. For example, humans use gaze shifts to mark key positions during extended manipulation sequences \\cite{Johansson2001}. In the same manner that our fixation policy implicitly defines a goal, humans use sequences of gaze shifts to indicate subgoals and monitor task completion \\cite{Johansson2001}. The emergence of sophisticated eye-hand coordination for object manipulation would be exciting to see.\n\n\n\n\\section{Conclusion}\n\n\\cite{Hassabis2017} argues that neuroscience (and biology in general) still contains important clues for tackling AI problems. We believe the case is even stronger for AI in robotics, where both the sensory and nervous systems of animals can provide a useful guide towards intelligent robotic agents. We mimicked two central features of the human visual system in our APR model: the space-variant sampling property of the retina, and the ability to actively perceive the world from different views. We showed that these two properties can complement and improve state-of-the-art reinforcement learning algorithms and generative models to learn representations of the world and accomplish challenging manipulation tasks efficiently. 
Our work is a step towards robotic agents that bridge the gap between perception and action using reinforcement learning.\n\n %\n %\n %\n %\n %\n\n\n\n\n\n\n\n\n\\printbibliography\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLet $\\mathcal W_n$ be a complete undirected graph on $n$ vertices where each edge is assigned an \nindependent exponential weight with mean $n$; this is referred to as the \\emph{stochastic \nmean-field} ($\\mbox{SMF}_n$) model. For a (self-avoiding) \\emph{path} $\\pi = (v_0, v_1,\\ldots, v_m)$, \ndefine its length $len(\\pi)$ and average weight $A(\\pi)$ by\n\\begin{align*}\n len(\\pi) = m \\,, \\mbox{ and }A(\\pi) = \\tfrac{1}{m}\\mbox{$\\sum_{i = 1}^m$} W_{(v_{i-1},v_i)}\\,,\n\\end{align*}\nwhere $W_{(u, v)}$ is the weight of the edge $(u, v)$. For $\\lambda >0$, let $L(n,\\lambda)$ be \nthe length of the longest path with average weight below $\\lambda$, i.e., \n\n$$L(n,\\lambda) = \\max\\{len(\\pi): A(\\pi) \\leq \\lambda, \\mbox{ }\\pi\\mbox{ is a path in the \n$\\mbox{SMF}_n$ model}\\}\\,.$$\n\nIn a non-rigorous paper of Aldous \\cite{Aldous05}, it was predicted that $L(n, \\lambda) \\asymp n \n(\\lambda - \\mathrm{e}^{-1})^\\beta$ with $\\beta = 3$ as $\\lambda \\downarrow \\mathrm{e}^{-1}$. Our main \nresult is the following theorem, which corrects Aldous' prediction.\n\\begin{theorem}\n\\label{Prop}\nLet $\\lambda = 1\/\\mathrm{e} + \\eta$ where $\\eta > 0$. Then there exist absolute constants $C_1, \nC_2, \\eta^*>0$ such that for all $\\eta \\leq \\eta^*$,\n\\begin{equation}\n\\label{main_prop}\n\\lim_{n\\to \\infty}\\mathbb{P}\\big(n \\mathrm{e}^{- C_1 \/ \\sqrt{\\eta}} \\leq L(n,\\lambda)\\leq n \n\\mathrm{e}^{- C_2 \/ \\sqrt{\\eta}}\\big) = 1\\,.\n\\end{equation}\n\\end{theorem}\n\nThe study of the object $L(n, \\lambda)$ was initiated by Aldous \\cite{Aldous98}, where a phase \ntransition was discovered at the threshold $\\mathrm{e}^{-1}$. 
It was shown that with high \nprobability $L(n, \\lambda)$ is of order $\\log n$ for $\\lambda < \\mathrm{e}^{-1}$ and $L(n, \n\\lambda)$ is of order $n$ when $\\lambda > \\mathrm{e}^{-1}$. The critical behavior was established \nin \\cite{Ding13}, where it was proved that with high probability $L(n, \\lambda)$ is of order \n$(\\log n)^3$ when $\\lambda$ is around $\\mathrm{e}^{-1}$ within a window of order $(\\log n)^{-2}$. \nOur Theorem~\\ref{Prop} describes the behavior in the near-supercritical regime, and in \nparticular states that $L(n, \\lambda)\/n$ is a stretched exponential in $\\eta$ as $\\eta = \\lambda \n- \\mathrm{e}^{-1} \\downarrow 0$. Another interesting result proved in \\cite{Ding13} states that \n$L(n, \\lambda) \\geq n^{1\/4}$ in a somewhat similar regime, namely $\\lambda \\geq 1 \/ \\mathrm{e} \n+ \\beta(\\log n)^{-2}$, where $\\beta > 0$ is an absolute constant. Notice that substituting $\\eta = \nC(\\log n)^{-2}$ in \\eqref{main_prop}, we indeed get a fractional power of $n$. In fact, our method \nshould work, subject to some technical modifications, all the way down to $\\eta = C(\\log \nn)^{-2}$ for a large absolute constant $C$. However, we do not attempt any rigorous proof of \nthis in the current paper. \n\nA highly related question is the length for the cycle of minimal mean weight, which was studied by \nMathieu and Wilson \\cite{MW13}. An interesting phase transition was found in \\cite{MW13} with \ncritical threshold $\\mathrm{e}^{-1}$ on the mean weight. Further results on this problem have been \nproved in \\cite{DSW}. 
It might be relevant to mention here that the method used in \\cite{DSW} \ncould be potentially useful for nailing down the second phase transition detected in \n\\cite{Ding13}, namely the transition from $\\eta = \\alpha (\\log n)^{-2}$ to $\\eta = \\beta (\\log \nn)^{-2}$ where $\\alpha, \\beta$ are positive constants.\n\nAnother related question is the classical travelling salesman problem (TSP), where one minimizes \nthe weight of the path subject to passing through every single vertex in the graph. For the TSP in \nthe mean-field setup, W\\\"{a}stlund \\cite{Wastlund10} established the sharp asymptotics for more \ngeneral distributions on the edge weight, confirming the Krauth-M\\'{e}zard-Parisi conjecture\n\\cite{MP86b, MP86, KM89}. Indeed, it is an interesting challenge to give a sharp estimate on $L(n, \n\\lambda)$ for $\\mathrm{e}^{-1} <\\lambda < \\lambda^*$ (here $\\lambda^*$ is the asymptotic value for \nTSP), interpolating between the critical behavior and the extremal case of TSP. A question of the \nsame flavor on the Steiner tree is given in \\cite{Bollobas04}.\n\nOne can also look at the maximum size of a tree with average weight below a certain threshold, \nwhere a phase transition was proved in \\cite{Aldous98}. The extremal case of the question on the \ntree with minimal average weight is the well-known minimal spanning tree problem, where a \n$\\zeta(3)$ limit was established by Frieze \\cite{Frieze85}.\n\n\\noindent {\\bf Main ideas of our proofs.} A straightforward first moment computation as done in \n\\cite{Aldous98} implies that $\\lim_{n \\to \\infty}\\mathbb{P}\\big(L(n, \\lambda) = O(\\log n)\\big) = \n1$ when $\\lambda < 1\/\\mathrm{e}$ (see also \\cite[Theorem 1.3]{Ding13}). For $\\lambda> \n1\/\\mathrm{e}$, a sprinkling method was employed in \\cite{Aldous98} to show that with high \nprobability $L(n, \\lambda) = \\Theta(n)$. 
The author first proved that with high probability there \nexists a large number of paths with average weight slightly above $1\/\\mathrm{e}$ and then used a \ncertain greedy algorithm to connect these paths into a single long path with average weight \nslightly above $1\/\\mathrm{e}$. However, the method in \\cite{Aldous98} was not able to describe the \nbehavior at criticality. In \\cite{Ding13} (see also \\cite{MW13} for the cycle with minimal average \nweight), a second moment computation was carried out restricted to paths of average weight below \n$1\/\\mathrm{e}$ and with the maximal deviation (defined in \\eqref{eq-def-M} below) at most $O(\\log \nn)$, thereby yielding that with high probability $L(n, 1\/\\mathrm{e}) = \\Theta((\\log n)^3)$. A \ncrucial fact responsible for the success of the second moment computation is that the length of \nthe target path is $\\Theta((\\log n)^3) \\ll \\sqrt{n}$. As such, a straightforward adaptation of this \nmethod would not be able to succeed in the regime considered by this paper. \n\nTSP, where one studies paths (cycles) that visit every single vertex, is in a sense analogous to \nthe question of finding the minimal value $\\lambda$ for which $L(n, \\lambda) = n$ with high \nprobability. W\\\"{a}stlund \\cite{Wastlund10} showed that the minimum average cost of TSP converges \nin probability to a positive constant by relaxing it to a certain linear optimization problem. But \nit seems difficult to extend his method to ``incomplete'' TSP, i.e., when the target object is the \nminimum-cost cycle having at least $pn$ many edges for some $p \\in (0,1)$. Since our problem is in \na sense dual to incomplete TSP in the regime we are interested in, the method of \\cite{Wastlund10} \ndoes not seem to be suitable for our purpose either. In the current work, our method is inspired \nby the (first and) second moment method from \\cite{Ding13, MW13} as well as the sprinkling method \nemployed in \\cite{Aldous98}. 
\n\nIn order to prove the upper bound, our main intuition is that if $L(n, \\lambda)$ were greater than \n$\\mathrm{e}^{-C_2\/\\sqrt{\\eta}} n$ then we would have a larger number of short and light paths (a \nlight path refers to a path with small average weight --- at most a little above $1\/\\mathrm{e}$) \nthan we would typically expect. Formally, let $\\ell = \\frac{c_1}{\\eta}$ where $c_1$ is a small \npositive constant, and consider the number of paths (denoted by $N_{\\eta \/ c_1, c_2}$) with length \n$\\ell$ and total weight no more than $\\lambda \\ell - c_2 \\sqrt{\\ell}$ for a positive constant \n$c_2$. We call such a path a \\emph{downcrossing}. A straightforward computation gives $\\mathbb E \nN_{\\eta \/ c_1, c_2} = O(1) n \\ell \\mathrm{e}^{-c_3\/\\sqrt{\\eta}}$ for a positive constant $c_3$ \ndepending on $c_1$ and $c_2$. Now we consider the number of paths (denoted by $N_\\delta$) of \nlength $\\delta(\\lambda) n$ and average weight at most $\\lambda$. Such paths have two \npossibilities: (1) The path contains substantially more than $\\mathbb E N_{\\eta \/ c_1, c_2}$ many \ndowncrossings, which is unlikely by Markov's inequality. (2) The path does not have substantially \nmore than $\\mathbb E N_{\\eta \/ c_1, c_2}$ many downcrossings. This is also unlikely for the \nfollowing reasons: (a) A straightforward first moment computation gives that $\\mathbb E N_\\delta \n= O(n) \\mathrm{e}^{c_4 \\delta n \\eta}$ for a constant $c_4 > 0$; (b) The number of downcrossings \nalong a path of this kind, or a random variable that is ``very likely'' smaller, should dominate a \nBinomial random variable $\\mathrm{Bin}(\\delta n\/\\ell, c_5)$ where $c_5 > 0$ is an absolute \nconstant (since in the random walk bridge, every subpath of size $\\ell$ has a positive chance to \nhave such a downcrossing). 
If we choose $\\delta$ suitably large as in Theorem~\\ref{Prop}, we \nincur a probability cost for the constraint on the number of downcrossings (the probability for a \nbinomial to be much smaller than its mean) and this probability cost is of magnitude $\\mathrm{e}^{-c_6 \n\\delta n \/\\ell}$ for a constant $c_6 > 0$ depending on $c_1$. If we choose $c_1$ small enough, this \nprobability cost kills the growth of $\\mathrm{e}^{c_4 \\delta n \\eta}$ in $\\mathbb E N_\\delta$. \nTherefore, paths of this kind do not exist either. The details are carried out in \nSection~\\ref{sec-upper}.\n\n\nFor the lower bound, our proof consists of two steps. In light of the preceding discussion, we \ncannot hope to directly apply a second moment method from \\cite{Ding13, MW13} to show the \nexistence of a light path that is of length linear in $n$. As such, in the first step of our proof \nwe prove that with high probability there exists a linear (in $n$) number of disjoint paths, each \nof which has average weight slightly below $\\lambda$ and is of length $\\mathrm{e}^{c_7\/\\sqrt{\\eta}}$ for \nan absolute constant $c_7>0$. This is achieved by two second moment computations, which are \nexpected to succeed as the length of the path under consideration is $\\ll \\sqrt{n}$ (indeed it \nremains bounded as $n\\to \\infty$). In the second step, we propose an algorithm which, with \nprobability going to 1, strings together a suitable collection of these short light paths to form \na light path of length $\\mathrm{e}^{-c_8\/\\sqrt{\\eta}}n$ for an absolute constant $c_8>0$. Our \nalgorithm is similar to the greedy algorithm (or, under a different name, exploration process) employed \nin \\cite{Aldous98}. But in order to ensure that the additional weight introduced by these \nconnecting bridges only increases the average weight of the final path by at most a multiple of \n$\\eta$, we have to use a more delicate algorithm. 
The details are carried out in Section~\\ref{sec-lower}.\n\n\\noindent {\\bf Notation convention.} \n For a graph $G$, we denote by $V(G)$ and $E(G)$ the sets of vertices and edges of $G$, \n respectively. A path in a graph $G$ is a (finite) ordered tuple of vertices $(v_0, v_1, \\cdots, \n v_m)$, all distinct. For a path $\\pi = (v_0, v_1, \\cdots, v_m)$, we also use $\\pi$ to denote the \n graph whose vertices are $v_0, v_1, \\cdots, v_m$ and edges are $(v_0, v_1), \\cdots, (v_{m-1}, \n v_m)$. This will be clear from the context. The weight of an edge $e$ in $\\mathcal{W}_n$ is denoted by $W_e$ and we define the total weight $W(\\pi)$ of a path $\\pi$ as $\\sum_{e \\in E(\\pi)} W_{e}$. The collection of all paths in $\\mathcal{W}_n$ of length $\\ell \\in [n]$ is denoted as $\\Pi_\\ell$. We let $\\lambda = 1\/\\mathrm{e} + \\eta$ where $\\eta$ is a fixed positive number. A path is called \\emph{$\\lambda$-light} if its average weight is at most $\\lambda$, and a path is called \\emph{$(\\lambda, C)$-light} if its total weight is at most $\\lambda \\ell - C\\sqrt{\\ell}$, where $\\ell$ is the length of \n the path. For nonnegative real- or integer-valued variables $x_0, x_1, \\cdots, x_n$, let $S$ be a \n statement involving $x_0, x_1, \\cdots, x_n$. We say that $S$ holds ``for large $x_0$ (given \n $x_1, \\cdots, x_n$)'' or ``when $x_0$ is large (given $x_1, \\cdots, x_n$)'' if it holds for any \n fixed values of $x_1, \\cdots, x_n$ in their respective domains and $x_0 \\geq a_0$, where $a_0$ is \n some positive number depending on the fixed values of $x_1, \\cdots, x_n$. In case $a_0$ is an \n absolute constant, the phrase ``(given $x_1, \\cdots, x_n$)'' will be dropped. We use ``for small \n $x_0$'' or ``when $x_0$ is small'' with or without the qualifying phrase ``(given $x_1, x_2, \n \\cdots, x_n$)'' in similar situations if the statement $S$ holds instead for $0 < x_0 \\leq a_0$. 
\n Throughout this paper, the order notations $O(.), \\Theta(.), o(.)$ etc.~are assumed to be with \n respect to $n \\to \\infty$ while keeping all the other involved parameters (such as $\\ell$, $\\eta$ \n etc.) fixed. We will use $C_1, C_2, \\ldots$ to denote constants, and each $C_i$ will denote the \n same number throughout the rest of the paper.\n\\smallskip\n\n\\noindent {\\bf Acknowledgements.} We are grateful to David Aldous for very useful discussions, and \nwe thank an anonymous referee for a careful review of an earlier manuscript and for suggesting a \nsimpler proof of Lemma~\\ref{count_vertices}.\n\n\\section{Proof of the upper bound}\n\\label{sec-upper}\n\nLet $\\eta'$ be $\\eta$ multiplied by a constant bigger than 1, whose precise value is to be \nselected. Set $\\ell = \\lfloor 1 \/ \\eta' \\rfloor$ and let $N_{\\eta'}$ be the number of ``$(\\lambda, \n1)$-light'' paths of length $\\ell$. We assume $\\eta < 1$ so that $\\ell \\geq 1$. As outlined in the \nintroduction, we shall first control $N_{\\eta'}$. 
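To make the object $N_{\eta'}$ concrete, here is a brute-force illustration of our own (not part of the argument): draw the $\mbox{SMF}_n$ edge weights and enumerate all ordered self-avoiding paths of length $\ell$, counting the $(\lambda, 1)$-light ones. The enumeration cost grows like $n^{\ell+1}$, so this is feasible only for very small instances; moreover, for $\lambda$ close to $1/\mathrm{e}$ and small $\ell$ the threshold $\lambda\ell - \sqrt{\ell}$ is negative and the count is simply zero, consistent with the exponentially small first moment computed below.

```python
import itertools
import math
import random

def smf_weights(n, rng):
    # SMF_n model: i.i.d. exponential edge weights with mean n on the
    # complete graph with vertex set {0, ..., n-1}.
    w = {}
    for u, v in itertools.combinations(range(n), 2):
        w[(u, v)] = w[(v, u)] = rng.expovariate(1.0 / n)
    return w

def count_light_paths(n, ell, lam, rng):
    # Brute-force count of (lam, 1)-light paths of length ell: ordered
    # self-avoiding paths whose total weight is at most lam*ell - sqrt(ell).
    # Each undirected path is counted once per direction, matching the
    # n^{ell+1} counting of ordered tuples.
    w = smf_weights(n, rng)
    threshold = lam * ell - math.sqrt(ell)
    count = 0
    for pi in itertools.permutations(range(n), ell + 1):
        if sum(w[(pi[i], pi[i + 1])] for i in range(ell)) <= threshold:
            count += 1
    return count
```

Since a path and its reversal carry the same total weight, the count returned is always even.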
\n\nIt is clear that the distribution of the total weight of a path of length $k$ follows a Gamma \ndistribution $\Gamma(k, 1\/n)$, where the density $f_{\theta, k}(z)$ of $\mathrm{Gamma}(k, \theta)$ \nis given by\n\begin{equation}\n\label{gamma_density}\nf_{\theta, k}(z) = \theta^k z^{k - 1}\mathrm{e}^{-\theta z} \/ (k - 1)!\mbox{ for all }z \geq 0, \n\theta > 0\mbox{ and }k\in \mathbb{N}.\n\end{equation}\nBy \eqref{gamma_density} and Stirling's formula, we carry out a straightforward computation \nand get that\n\begin{eqnarray}\n\mathbb{E} N_{\eta'} & = & (1 + o(1)) \times n^{\ell+1} \times \mathbb{P}\Big ( \mathrm{Gamma}\n(\ell,1\/n) \leq \lambda \ell - \sqrt{\ell} \Big ) \nonumber\\\n\t\t\t& = & (1 + o(1)) \times n^{\ell+1}\times \frac{\mathrm{e}^{-(\lambda \ell - \n\t\t\t\sqrt{\ell})\/n}(\lambda \ell - \sqrt{\ell})^{\ell}}{\ell!n^{\ell}}\nonumber\\\n\label{first_bnd}& = & (1 + o(1))C_0(\eta)\alpha\mathrm{e}^{\mathrm{e}\eta\/\eta'} \sqrt{\eta'} \n\mathrm{e}^{-\mathrm{e}\/\sqrt{\eta'}}n,\n\end{eqnarray}\nwhere $C_0(\eta) \to 1$ as $\eta \to 0$, and $\alpha$ is a positive constant. Furthermore, the \nfactors $1 + o(1)$ are strictly less than 1.\n\\\nWe also need a bound on the second moment of $N_{\eta'}$ to control its concentration around \n$\mathbb{E}N_{\eta'}$. For $\gamma \in \Pi_\ell$, define $F_{\gamma}$ to be the event that \n$\gamma$ is $(\lambda, 1)$-light. Then clearly we have $N_{\eta'} = \sum_{\gamma \in \n\Pi_\ell}\mathbf{1}_{F_{\gamma}}$. In order to compute $\mathbb E (N_{\eta'})^2$, we need to estimate \n$\mathbb{P}(F_{\gamma} \cap F_{\gamma'})$ for $\gamma, \gamma' \in \Pi_\ell$. In the case \n$E(\gamma) \cap E(\gamma') = \emptyset$, the events $F_\gamma$ and $F_{\gamma'}$ are independent of each \nother and thus $\mathbb{P}(F_{\gamma'}|F_{\gamma}) = \mathbb{P}(F_{\gamma'})$. 
In the case $|\nE(\\gamma) \\cap E(\\gamma')| = j > 0$, we have\n\\begin{eqnarray}\n\\label{cond_prob_order}\n\\mathbb{P}(F_{\\gamma'}|F_{\\gamma}) &\\leq & \\mathbb{P}\\big(\\mathrm{Gamma}(\\ell-j,1\/n) \\leq \\lambda \n\\ell\\big)\n \\leq \\tfrac{1}{(\\ell - j)!}\\tfrac{(\\lambda \\ell)^{\\ell - \n j}}{n^{\\ell - j}}.\n\\end{eqnarray}\nFurther notice that if $|E(\\gamma) \\cap E(\\gamma')| = j$, then $|V(\\gamma) \\cap V(\\gamma')|$ is at \nleast $j + 1$ as $\\gamma \\cap \\gamma'$ is acyclic. So given any $\\gamma \\in \\Pi_\\ell$, the number \nof paths $\\gamma'$ such that $|E(\\gamma) \\cap E(\\gamma')| = j$ is at most $O(n^{\\ell - j})$. \nAltogether, we obtain that\n\\begin{align}\n\\mathbb{E} N_{\\eta'}^2 & = \\mbox{$\\sum_{\\gamma, \\gamma' \\in \\Pi_\\ell}$}\\mathbb{P}(F_{\\gamma}\\cap \nF_{\\gamma'}) = \\mbox{$\\sum_{\\gamma \\in \\Pi_\\ell}$}\\mathbb{P}(F_{\\gamma})\\mbox{$\\sum_{\\gamma' \\in \n\\Pi_\\ell}$}\\mathbb{P}(F_{\\gamma'}|F_{\\gamma}) \\nonumber \\\\\n & \\leq \\mbox{$\\sum_{\\gamma \\in \\Pi_\\ell}$}\\mathbb{P}\n (F_{\\gamma})\\Big(\\sum_{\\gamma':E(\\gamma' \\cap \n \\gamma)=\\emptyset}\\mathbb{P}(F_{\\gamma'}) + \\sum_{1 \\leq j \\leq \n \\ell} \\sum_{\\gamma':|E(\\gamma' \\cap \\gamma)|=j}\\frac{1}{(\\ell - \n j)!}\\frac{(\\lambda \\ell)^{\\ell - j}}{n^{\\ell - j}}\\Big) \\nonumber\\\\\n & = \\mbox{$\\sum_{\\gamma \\in \\Pi_\\ell}$}\\mathbb{P}\n (F_{\\gamma})\\Big(\\sum_{\\gamma':E(\\gamma' \\cap \n \\gamma)=\\emptyset}\\mathbb{P}(F_{\\gamma'}) + \n \\sum_{1 \\leq j \\leq \n \\ell}\\frac{O(n^{\\ell - j})}{(\\ell - \n j)!}\\frac{(\\lambda \\ell)^{\\ell - j}}{n^{\\ell - \n j}}\\Big) \\nonumber\\\\\n &\\leq \\mbox{$\\sum_{\\gamma \\in \\Pi_\\ell}$}\\mathbb{P}\n (F_{\\gamma})\\Big(\\mathbb{E}N_{\\eta'} + O(1) \\Big) = \n \\mathbb{E}N_{\\eta'}\\Big(\\mathbb{E}N_{\\eta'} + O(1) \\Big). 
\n \\label{first_second_moment}\n\\end{align}\nSince $\\mathbb E N_{\\eta'} = \\Omega(1)$ as implied by \\eqref{first_bnd}, \\eqref{first_second_moment} \nyields that\n\\begin{equation}\n\\label{concentration_1}\n\\mathbb{E}N_{\\eta'}^2 = (\\mathbb{E}N_{\\eta'})^2 (1 + o(1)).\n\\end{equation}\nAs a consequence of Markov's inequality (applied to $|N_{\\eta'} - \\mathbb E N_{\\eta'} |^2$), we get that\n\\begin{equation}\n\\label{concentration_2}\n\\mathbb{P}\\big(N_{\\eta'} \\geq 2 \\mathbb{E}N_{\\eta'}\\big) = o(1).\n\\end{equation}\n\nNext, we set out to show that any long $\\lambda$-light path should have a large number of subpaths \nwhich are $(\\lambda, 1)$-light. Let $\\pi$ be a path of length $\\delta n$ for some $\\delta > \n0$. Denote its successive edge weights by $X_1, X_2, \\ldots X_{\\delta n}$ and let $S_k = \n\\sum_{i=1}^k X_i$ for $1\\leq k\\leq \\delta n$. Probabilities of events involving edge weights of \n$\\pi$, unless specfically mentioned, will be assumed to be conditioned on ``$\\{A(\\pi) \\leq \n\\lambda\\}$'' throughout the remainder of this section. Now divide $\\pi$ into edge-disjoint \nsubpaths of length $\\ell$ (with the last subpath of length possibly less than $\\ell$ in the case \n$\\ell$ does not divide $\\delta n$) and denote the $k$-th subpath by $b_k^{\\pi}$ for $1 \\leq k \n\\leq \\delta n \/ \\ell$. Call any such subpath a downcrossing if it is $(\\lambda, 1)$-light. Let \n$D_{k, \\eta', n}^{\\pi}$ be the event that $b_k^\\pi$ is a downcrossing. The following \nwell-known result about exponential random variables (see, e.g., \\cite[Theorem 6.6]{Dasgupta2011}) \nwill be very useful.\n\n\\begin{lemma}\n\\label{Beta}\nLet $W_1, W_2, \\ldots, W_N$ be i.i.d.\\ exponential random variables with mean $1\/\\theta$, and let \n$S_k = \\sum_{i=1}^k W_i$ for $1\\leq k\\leq N$. 
Then the random vector $(\frac{W_1}{S_N},\ldots, \frac{W_{N-1}}{S_N})$ follows the $\mathrm{Dirichlet}(\mathbf{1}_{N})$ distribution, $S_N$ follows the $\mathrm{Gamma}(N, \theta)$ distribution, and they are independent of each other. Here $\mathbf{1}_{N}$ is the $N$-dimensional vector all of whose entries are $1$.
\end{lemma}
We will also require the following simple lemma, which we prove for the sake of completeness.
\begin{lemma}
\label{Azuma}
Let $Z_1, Z_2, \ldots, Z_N$ be i.i.d.\ exponential random variables with mean 1 and let $S_N = \sum_{i = 1}^N Z_i$. Then
\begin{align}
\mathbb{P}(S_N \geq N + \alpha) &\leq \mathrm{e}^{-\alpha^2/4N} \mbox{ for all } 0<\alpha \leq (2 - \sqrt{2})N\,, \label{Azuma1}\\
\mathbb{P}(S_N \leq N - \alpha) &\leq \mathrm{e}^{-\alpha^2/2N}\,, \mbox{ for all } \alpha>0\,. \label{Azuma2}
\end{align}
\end{lemma}
\begin{proof}
By Markov's inequality, we get that for any $\alpha > 0$ and $0 < \theta < 1$,
$$\mathbb{P}(S_N \geq N + \alpha) = \mathbb P(e^{\theta S_N} \geq e^{\theta (N+ \alpha)}) \leq \tfrac{\mathrm{e}^{-\theta N - \theta \alpha}}{(1-\theta)^N}.$$
When $ \theta \leq 1 - 1/\sqrt{2}$, the right hand side is bounded above by $\mathrm{e}^{N \theta^2 - \alpha\theta}$. So setting $\theta = \alpha / 2N$ yields \eqref{Azuma1} as long as $0 < \alpha / 2N \leq 1 - 1/\sqrt{2}$. One can prove \eqref{Azuma2} in the same manner.
\end{proof}

As hinted in the introduction, we first aim to prove that the number of downcrossings along the first half of $\pi$ (or any fraction of it) dominates a binomial random variable $\mathrm{Bin}(\delta n / 2\ell, p)$ for some positive, absolute constant $p$. So essentially we need to prove that a subpath $b_k^{\pi}$ can be a downcrossing with probability at least $p$ regardless of the first $(k-1)\ell$ edges of $\pi$ that precede it.
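The tail estimates \eqref{Azuma1} and \eqref{Azuma2} are easy to sanity-check by simulation. The following sketch (the choices $N = 100$, $\alpha = 20$ and the number of trials are arbitrary illustrations, not taken from the paper) compares empirical tail frequencies of $S_N$ against the two bounds:

```python
import math
import random

random.seed(1)

N, alpha, trials = 100, 20.0, 20000  # 0 < alpha <= (2 - sqrt(2)) * N holds here
upper = lower = 0
for _ in range(trials):
    s = sum(random.expovariate(1.0) for _ in range(N))  # S_N: sum of N mean-1 exponentials
    upper += s >= N + alpha
    lower += s <= N - alpha

# empirical tail frequencies against the bounds of the lemma
print(upper / trials, math.exp(-alpha ** 2 / (4 * N)))  # bound e^{-1} ~ 0.368
print(lower / trials, math.exp(-alpha ** 2 / (2 * N)))  # bound e^{-2} ~ 0.135
```

As expected for non-optimized Chernoff-type estimates, the empirical frequencies sit well below the bounds.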
Now the conditional distribution of $X_{(k-1)\ell + 1}, X_{(k-1)\ell + 2}, \cdots, X_{\delta n}$ given $X_1, X_2, \cdots, X_{(k-1)\ell}$ and $A(\pi) \leq \lambda$ is essentially the distribution of $X_{(k-1)\ell + 1}, X_{(k-1)\ell + 2},$ $\cdots, X_{\delta n}$ conditioned on $\sum_{i = (k - 1)\ell + 1}^{\delta n}X_i \leq \lambda \delta n - S_{(k - 1)\ell}$. On the other hand, we get from Lemma~\ref{Beta} that the conditional mean and variance of $W(b_k^\pi)$ given $S_{\delta n} - S_{(k-1)\ell} = \mu (\delta n - (k-1)\ell)$ are $\mu \ell$ and $\mu^2 \ell(1 + o(1))$ respectively for all $\mu > 0$ and $k \leq \delta n / 2\ell$. Hence it is plausible to expect that the probability of the event $\{W(b_k^{\pi}) \leq \Lambda_k(\ell - C\sqrt{\ell})\}$ conditional on any set of values for $X_1, X_2, \cdots, X_{(k - 1)\ell}$ is bounded away from 0 for large $\ell$ and $n$, where $\Lambda_k = \Lambda_k^\pi = (S_{\delta n} - S_{(k-1)\ell})/(\delta n - (k-1)\ell)$ and $C$ is some positive number. Let us denote the event $\{W(b_k^{\pi}) \leq \Lambda_k(\ell - C\sqrt{\ell})\}$ by $A_{k, \eta', n}^{C, \pi}$. Thus it seems more immediate to prove the stochastic domination for the number of occurrences of the $A_{k, \eta', n}^{C, \pi}$'s which, for the time being, can be treated as a ``proxy'' for the number of downcrossings. The formal statement is given in the next lemma, where we use $6$ as the value of $C$ since this allows us to avoid unnecessary named variables and also suits our specific needs for the computations carried out at the end of this section.

\begin{lemma}
\label{drop_prob}
Let $N_{\eta', n}^\pi$ be the number of occurrences of the events $A_{k, \eta', n}^{\pi} = A_{k, \eta', n}^{6, \pi}$ for $1 \leq k \leq \delta n / 2\ell$.
Then for any $0 < \eta' < \eta_0$ where $\eta_0$ is a positive, absolute constant and any $0 < \delta_0 < 1$ there exists a positive integer $n_d = n_d(\delta_0, \eta')$ and an absolute constant $c > 0$ such that for all $\delta \geq \delta_0$ and $n \geq n_d$ the conditional distribution of $N_{\eta',n}^\pi$ given $\{A(\pi) \leq \lambda\}$ stochastically dominates the binomial distribution $\mathrm{Bin}(\delta n /2\ell, c)$.
\end{lemma}

\begin{proof}
Notice that it suffices to prove that there exist positive absolute constants $\ell_0, c$ such that uniformly for $\mu > 0$, $\ell \geq \ell_0$ and large $L$ (given $\ell$)
$$\mathbb{P}(S_{\ell} \leq \tfrac{S_L}{L}(\ell - 6\sqrt{\ell}) | S_L = \mu L)\geq c\,.$$ To this end, we see that for $L > \ell$
\begin{eqnarray}
\mathbb{P}(S_{\ell} \leq \tfrac{S_L}{L}(\ell - 6\sqrt{\ell}) | S_L = \mu L) = \mathbb{P}(\tfrac{S_{\ell}}{S_L} \leq (\ell - 6\sqrt{\ell})/L | S_L = \mu L) = \mathbb{P}(\tfrac{S_{\ell}}{S_L} \leq (\ell - 6\sqrt{\ell})/L) \,,\label{ratio_prob}
\end{eqnarray}
where the last equality follows from Lemma~\ref{Beta}. Since the distribution of $\frac{S_{\ell}}{S_L}$ does not depend on the mean of the underlying $X_j$'s, we can in fact assume that the $X_j$'s are i.i.d.\ exponential variables with mean 1 for the purpose of computing \eqref{ratio_prob}. By \eqref{Azuma2}, we have
$$\mathbb{P}\big(S_L/L \leq 1 - 1/(2\sqrt{\ell})\big) \leq \mathrm{e}^{-L / 8 \ell}\,.$$
So for $\ell - 6\sqrt{\ell} > 0$, we get
\begin{equation}
\label{prob_ineq}
\mathbb{P}(S_{\ell} \leq \tfrac{S_L}{L}(\ell - 6\sqrt{\ell}))
\geq \mathbb{P}(S_{\ell} \leq \ell - 6.5\sqrt{\ell}) - \mathrm{e}^{-L / 8 \ell}.
\end{equation}
By the central limit theorem, there exist absolute numbers $\ell_0, c'>0$ such that $\mathbb{P}(S_{\ell} \leq \ell - 6.5\sqrt{\ell}) \geq c'$ for $\ell \geq \ell_0$.
Hence from \n\\eqref{prob_ineq} it follows that for any $\\ell \\geq \\ell_0$ there exists $L_0 = L_0(\\ell)$ such \nthat the right hand side of \\eqref{ratio_prob} is at least $c = 0.99c'$ for $L \\geq L_0$.\n\\end{proof}\nNow what remains to show is that the number of downcrossings $\\tilde N_{\\eta', n}^\\pi$ \nalong $\\pi$ is bigger than $N_{\\eta', n}^\\pi$ with high probability. Notice that the \noccurrence of $A_{k, \\eta', n}^{\\pi} \\setminus D_{k, \\eta', n}^{\\pi}$ implies that \n$\\Lambda_k$ must be ``significantly'' above $\\lambda$. But that can only be caused by a \nsubstantial drop in $S_k$ for some $1 \\leq k \\leq \\delta n \/2$, an event that occurs with small \nprobability.\n\\begin{lemma}\n\\label{slope_rise}\nDenote by $E_{\\eta', n}^\\pi$ the event that $\\Lambda_k$ is more than $\\lambda + \\sqrt{\\eta'}$ for \nsome $1 \\leq k \\leq \\frac{\\delta n}{2\\ell}$. Then for any $0 < \\eta' < 1\/4$ and $0 < \\delta_0 < 1$ \nthere exists a positive integer $n_s = n_s(\\delta_0, \\eta')$ such that,\n\\begin{equation}\n\\label{Ineq2}\n\\mathbb{P}(E_{\\eta', n}^\\pi| A(\\pi) \\leq \\lambda) \\leq 2n\\mathrm{e}^{-\\delta n\\eta' \/ 16} \\mbox{ \nfor all } \\delta \\geq \\delta_0 \\mbox{ and }n \\geq n_s\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nFor $1 \\leq k \\leq \\delta n\/2\\ell$, let $\\ell_k = (k - 1)\\ell$, $n_s = \\lceil 2\\ell \/ \\delta_0 \n\\rceil$ and $E_{k, \\eta', n}^\\pi = \\{\\Lambda_k\\geq \\lambda + \\sqrt{\\eta'}\\}$. Assume $n \\geq n_s$ \nso that $\\delta n \/ 2\\ell \\geq 1$. 
On $E_{k, \\eta', \nn}^\\pi$, we have\n\\begin{eqnarray*}\n\\frac{S_{\\ell_k}}{S_{\\delta n}} \\leq \\frac{\\ell_k S_{\\delta n} \/ \\delta n - \\sqrt{\\eta'}(\\delta n \n- \\ell_k)}{S_{\\delta n}} \\leq \\frac{\\ell_k}{\\delta n} - \\frac{\\sqrt{\\eta'}(\\delta n - \\ell_k)}\n{\\delta n}\\,,\n\\end{eqnarray*}\nwhere the last inequality holds since we are conditioning on $S_{\\delta n} \\leq \\lambda \\delta n$ \nand $\\lambda \\leq 1$ when $\\eta' < 1\/4$ (recall that $\\eta < \\eta'$). Therefore, we get\n\\begin{equation}\\label{slope_change}\n\\mathbb{P}(E_{k,\\eta',n}^\\pi | A(\\pi) \\leq \\lambda) \n\\leq \\mathbb{P}(S_{\\ell_k} \\leq \\tfrac{S_{\\delta n}}{\\delta n}\\big(\\ell_k - \\sqrt{\\eta'}(\\delta n \n- \\ell_k)))\n\\end{equation}\nNow we evaluate the right hand side of \\eqref{slope_change}. Analogous to \\eqref{ratio_prob} in \nthe proof of Lemma~\\ref{drop_prob}, we can assume without loss of generality that $X_1, \nX_2, \\ldots X_L$ are i.i.d.\\ exponential variables with mean 1.\nIt is routine to check that $$(1+\\sqrt{\\eta'}\/2)\\times \\big(\\ell_k - \\sqrt{\\eta'}(\\delta n - \n\\ell_k)\\big) \\leq \\ell_k - \\sqrt{\\eta'}\\delta n\/4 \\,, \\mbox{ for all } 1\\leq k \\leq \\delta \nn\/2\\ell\\,.$$\nThus, for all $1 \\leq k \\leq \\delta n\/2\\ell$ we get\n\\begin{eqnarray*}\n\\mathbb{P}\\Big(S_{\\ell_k} \\leq \\frac{S_{\\delta n}}{\\delta n}\\big(\\ell_k - \\sqrt{\\eta'}(\\delta n - \n\\ell_k)\\big)\\Big) \n& \\leq & \\mathbb{P}\\Big(S_{\\ell_k} \\leq \\ell_k - \\sqrt{\\eta'}\\delta n\/4 \\Big) + \n\\mathbb{P}\\Big(\\frac{S_{\\delta n}}{\\delta n} \\geq 1 + \\sqrt{\\eta'}\/2\\Big)\\\\\n& \\leq & \\mathrm{e}^{-\\delta n \\eta' \/ 16} + \\mathrm{e}^{-\\delta n \\eta'\/ 16},\n\\end{eqnarray*}\nwhere the second inequality follows from \\eqref{Azuma2} and \\eqref{Azuma1} respectively. 
Combined with \eqref{slope_change}, this gives
\begin{equation*}
\mathbb{P}(E_{k,\eta',n}^\pi|A(\pi) \leq \lambda) \leq 2\mathrm{e}^{- \delta n \eta'/16}\,, \mbox{ for all }1 \leq k \leq \delta n/2\ell\,.
\end{equation*}
An application of a union bound over $k$ completes the proof of the lemma.
\end{proof}

\begin{proof}[Proof of Theorem~\ref{Prop}: upper bound]
Assume that $\eta' < 1/4 \wedge \eta_0$ where $\eta_0$ is the same as in the statement of Lemma~\ref{drop_prob}. Fix a $\delta_0 = \delta_0(\eta')$ in $(0,1)$ and let $n_0 = n_0(\delta_0, \eta') = n_d(\delta_0, \eta') \vee n_s(\delta_0, \eta')$, where $n_d, n_s$ are as stated in Lemmas \ref{drop_prob} and \ref{slope_rise} respectively. In the remaining part of this section we will assume that $n \geq n_0$ and $\delta \geq \delta_0$, so that Lemmas \ref{drop_prob} and \ref{slope_rise} become applicable. Now let $\pi$ be a path of length $\delta n$. From Lemma~\ref{slope_rise} we get that with large probability $\Lambda_k \leq \lambda + \sqrt{\eta'}$ for all $k$ between 1 and $\delta n /2\ell$. But it takes a routine computation to show that $A_{k,\eta',n}^\pi \cap \{\Lambda_k \leq \lambda + \sqrt{\eta'}\}\subseteq D_{k,\eta',n}^\pi$ when $\eta'$ is small. Thus $\tilde N_{\eta', n}^\pi \geq N_{\eta', n}^\pi$ except on $E_{\eta', n}^\pi$. Consequently Lemma~\ref{drop_prob} allows us to use the binomial distribution to bound quantities like $\mathbb{P}(\tilde N_{\eta', n}^\pi \leq x)$ with a ``small error term'' caused by the rare event $E_{\eta', n}^\pi$.
Formally,
\begin{eqnarray*}
\mathbb{P}\Big(\tilde N_{\eta' , n}^\pi \leq 2\mathbb{E}N_{\eta'} | A(\pi) \leq \lambda\Big)
&\leq & \mathbb{P}\Big(N_{\eta' , n}^\pi \leq 2\mathbb{E}N_{\eta'} | A(\pi) \leq \lambda\Big) + \mathbb{P}\Big(E_{\eta', n}^\pi| A(\pi) \leq \lambda\Big)\\
& \leq & \mathbb{P}\Big(N_{\eta' , n}^\pi \leq 2\mathbb{E}N_{\eta'}|A(\pi) \leq \lambda\Big) + 2n\mathrm{e}^{-\delta n \eta'/16}\,,
\end{eqnarray*}
where the last inequality follows from Lemma~\ref{slope_rise}.
Therefore, by Lemma~\ref{drop_prob}, we get that
\begin{equation}
\mathbb{P}\Big(\tilde N_{\eta', n}^\pi \leq 2\mathbb{E}N_{\eta'} | A(\pi) \leq \lambda\Big) \leq \mathbb{P}\Big(\mathrm{Bin}(\delta n / 2\ell, c) \leq 2\mathbb{E}N_{\eta'}\Big) + 2n\mathrm{e}^{-\delta n \eta'/16} \,.\label{second_bd}
\end{equation}
Next let us define a new event as
$$\Xi_{\eta,\delta_0,n} = \mbox{$\bigcup_{k \geq \delta_0 n}$ $\bigcup_{\pi \in \Pi_k}$}\big\{\tilde N_{\eta', n}^\pi \geq 2\mathbb{E}N_{\eta'}, A(\pi) \leq \lambda \big\}.$$
So $\Xi_{\eta,\delta_0,n}$ is the event that there exists a $\lambda$-light path $\pi$ with $len(\pi) \geq \delta_0 n$ that contains at least $2\mathbb{E}N_{\eta'}$ downcrossings. Thus the occurrence of $\Xi_{\eta,\delta_0,n}$ implies that $N_{\eta'} \geq 2\mathbb{E}N_{\eta'}$, which has small probability owing to \eqref{concentration_2}. On the other hand, if $\Xi_{\eta,\delta_0,n}$ does not occur, $L(n, \lambda) \geq \delta_0 n$ implies the existence of a $\lambda$-light path of length at least $\delta_0 n$ that has fewer than $2\mathbb{E}N_{\eta'}$ downcrossings.
Formally,
\begin{align}
\mathbb{P}&\big(L(n, \lambda) \geq \delta_0 n \big) \leq \mathbb{P}\big(\Xi_{\eta,\delta_0,n}\big) + \mathbb{P}\big(\{L(n, \lambda) \geq \delta_0 n\} \setminus \Xi_{\eta,\delta_0,n}\big)\nonumber \\
& \leq \mathbb{P}\big(N_{\eta'} \geq 2\mathbb{E}N_{\eta'}\big) + \mathbb{P}\Big(\mbox{$\bigcup_{k \geq \delta_0 n}$ $\bigcup_{\pi \in \Pi_k}$}\big\{\tilde N_{\eta', n}^\pi \leq 2\mathbb{E}N_{\eta'}, A(\pi) \leq \lambda \big\}\Big) \nonumber\\
& \leq o(1) + \mbox{$\sum_{k \geq \delta_0 n}$ $\sum_{\pi \in \Pi_k}$}\mathbb{P}\Big(\tilde N_{\eta', n}^\pi \leq 2\mathbb{E}N_{\eta'} | A(\pi) \leq \lambda\Big)\mathbb{P}\big(A(\pi) \leq \lambda\big)\,.
\label{break-up}
\end{align}
Now choose $\delta_0 = \delta_0(\eta')$ such that
\begin{equation}
\label{Ineq3}
\delta_0 n \eta' c /4 = 2\mathbb{E}N_{\eta'}\,.
\end{equation}
Since $1 / \ell \geq \eta'$, we then get from binomial concentration that for $\delta \geq \delta_0$,
$$\mathbb{P}\Big(\mathrm{Bin}(\delta n / 2\ell, c) \leq 2\mathbb{E}N_{\eta'}\Big) \leq \mathrm{e}^{-\delta n\eta'c^2/ 16}\,.$$
Plugging this into \eqref{second_bd}, we have
\begin{equation*}
\mathbb{P}\Big(\tilde N_{\eta', n}^\pi \leq 2\mathbb{E}N_{\eta'} | A(\pi) \leq \lambda\Big) \leq 2n\mathrm{e}^{- len(\pi)\eta'/16} + \mathrm{e}^{- len(\pi)\eta'c^2/ 16}\,,
\end{equation*}
whenever $len(\pi) \geq \delta_0 n$.
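The binomial concentration used above is a standard Chernoff-type lower-tail estimate; in its generic form, $\mathbb{P}(\mathrm{Bin}(m, p) \leq mp/2) \leq \mathrm{e}^{-mp/8}$. A quick numerical illustration (the parameters $m$, $p$ below are hypothetical and unrelated to the $\delta$, $n$, $\ell$, $c$ of the proof):

```python
import math
import random

random.seed(4)

# P(Bin(m, p) <= mp/2) <= exp(-mp/8): generic Chernoff lower-tail bound;
# m and p are illustrative choices only
m, p, trials = 200, 0.3, 20000
threshold = m * p / 2

hits = 0
for _ in range(trials):
    s = sum(random.random() < p for _ in range(m))
    hits += s <= threshold

emp = hits / trials
bound = math.exp(-m * p / 8)
print(emp, bound)  # empirical frequency is far below the bound
```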
A straightforward computation using \eqref{gamma_density} yields
$$\mbox{$\sum_{\pi \in \Pi_k}$}\mathbb{P}\big(A(\pi) \leq \lambda\big) \leq \frac{n}{\sqrt{2\pi k}}\mathrm{e}^{\mathrm{e} k \eta}\,.$$
The last two displays and \eqref{break-up} together imply that
\begin{equation}
\label{break_up2}
\mathbb{P}\big(L(n, \lambda) \geq \delta_0 n \big) \leq o(1) + \mbox{$\sum_{k \geq \delta_0 n}$}(2n\mathrm{e}^{- k\eta'/16} + \mathrm{e}^{- k\eta'c^2/ 16})\mathrm{e}^{\mathrm{e} k \eta}\frac{n}{\sqrt{2\pi k}}\,.
\end{equation}
Setting $\eta' = 32\mathrm{e}\eta / c^2$, we get from \eqref{break_up2} that
$$\mathbb{P}\big(L(n, \lambda)\geq \delta_0 n\big) = o(1)\,.$$
It remains to check that $\delta_0$ obtained from \eqref{Ineq3} has the correct functional form as in \eqref{main_prop}. To this end, recall from \eqref{first_bnd} that
$$2\mathbb{E}N_{\eta'} \leq 3\alpha \mathrm{e}^{\mathrm{e}\eta/\eta'} \sqrt{\eta'} \mathrm{e}^{-\mathrm{e}/\sqrt{\eta'}}n\,,$$
where $\eta$ is small enough so that $C_0(\eta)$ in \eqref{first_bnd} is less than $3/2$. Hence $\delta_0 \leq \mathrm{e}^{-C_2 / \sqrt{\eta}}$ for some absolute constant $C_2$ when $\eta$ is small.
\end{proof}

\section{Proof of the lower bound}
\label{sec-lower}

\subsection{Existence of a large number of vertex-disjoint light paths}
As we mentioned in the introduction, the proof of the lower bound is divided into two steps. In the first step we split the vertices into two parts and show that there exists a large number of short (i.e., of length $O(1)$) vertex-disjoint $\lambda$-light paths containing vertices from only one part. In the second step we use vertices in the other part as ``links'' to connect a subcollection of the short paths obtained from step 1 into a long (i.e., of length $\Theta(n)$) light path.
The current and next subsections are devoted to these two steps in that order.
\par

In light of the preceding discussion, let us first select a complete subgraph $\mathcal W^*_n$ of $\mathcal W_n$ containing $n_* = n_{*; \eta, \zeta_1} = (1 - \zeta_1 \eta)n$ vertices where $\eta, \zeta_1 \in (0, 1)$. To be specific, we can order the vertices of $\mathcal W_n$ in some arbitrary way and define $\mathcal W^*_n$ as the subgraph induced by the ``first'' $n_*$ vertices. It will be shown that substantially many short and light paths can be formed with the vertices in $V(\mathcal W_n^*)$. We will in fact require slightly more from a path than just being $\lambda$-light. For $\pi \in \Pi_\ell$ and some $\zeta_2 > 0$, define
\begin{equation}
G_{\pi} = G_{\pi; \eta, \zeta_2} = \Big \{\lambda \ell - 1 \leq W(\pi) \leq \lambda \ell, M(\pi) \leq (\zeta_2/\sqrt{\eta})\cdot(W(\pi)/\lambda \ell)\Big \}\,,
\label{good_event}
\end{equation}
where $M(\pi)$ is the maximum deviation of $\pi$ away from the linear interpolation between the starting and ending edges, formally given by
\begin{equation}\label{eq-def-M}
M(\pi) = \sup_{1\leq k \leq \ell}\big|\mbox{$\sum_{i = 1}^{k}$}W_{e_i} - \tfrac{k}{\ell}W(\pi)\big|\,.
\end{equation}
A similar class of events was considered in \cite{Ding13, MW13} for the purpose of second moment computations. As the authors mentioned in these papers, the factor $W(\pi)/\lambda \ell$ provides some technical ease in view of the following property, which is a consequence of Lemma~\ref{Beta}:
\begin{equation}
\mathbb{P}(M(\pi) \leq (\zeta_2/\sqrt{\eta})\cdot(W(\pi)/\lambda \ell)\mbox{ }|\mbox{ }W(\pi) = w) \equiv \mbox{constant for all }w > 0.\label{cond_Property}
\end{equation}
Call a path $\pi\in \Pi_\ell$ \emph{good} if $G_{\pi}$ occurs.
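Property \eqref{cond_Property} expresses the scale invariance coming from Lemma~\ref{Beta}: the statistic $M(\pi)\cdot\lambda\ell/W(\pi)$ depends on the edge weights only through their proportions, hence is independent of $W(\pi)$. A Monte Carlo sketch of this independence (taking $\lambda = 1$, which is harmless since only the ratio matters; all parameter choices are illustrative):

```python
import random

random.seed(2)

ell, trials = 50, 20000
ws, stats = [], []
for _ in range(trials):
    w = [random.expovariate(1.0) for _ in range(ell)]
    W = sum(w)
    # M(pi) is the maximal deviation of the partial sums from the linear
    # interpolation (k/ell) * W(pi); M(pi)/W(pi) is a function of the
    # proportions w_i / W only, hence independent of W by Lemma Beta
    partial, m = 0.0, 0.0
    for k in range(1, ell + 1):
        partial += w[k - 1]
        m = max(m, abs(partial - (k / ell) * W))
    ws.append(W)
    stats.append(m * ell / W)  # the statistic of (cond_Property), with lambda = 1

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(corr(ws, stats))  # close to 0: the statistic carries no information about W
```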
Since we are only interested in good paths whose vertices come from $V(\mathcal W^*_n)$, we need some related notation. For $\ell \in \mathbb N$, denote by $\Pi_\ell^* = \Pi_{\ell; \eta, \zeta_1}^*$ the set of all paths of length $\ell$ in $\mathcal{W}_n^*$ and by $N^*_\ell = N^*_{\ell; \eta, \zeta_1, \zeta_2}$ the total number of good paths in $\Pi_\ell^*$, i.e., $N^*_\ell = \sum_{\pi \in \Pi_\ell^*}\mathbf{1}_{G_{\pi}}$. In order to carry out a second moment analysis of $N^*_\ell$, we need to control the correlation between $\mathbf{1}_{G_{\pi}}$ and $\mathbf{1}_{G_{\pi'}}$ where $\pi$, $\pi' \in \Pi_\ell^*$. It is plausible that this correlation depends on the number of common edges between $\pi$ and $\pi'$, and in fact bounding the correlation in terms of the number of common edges was sufficient for proving \eqref{concentration_1} in Section \ref{sec-upper}. But in this case we need an additional measurement beyond just $|E(\pi) \cap E(\pi')|$. This is discussed in detail in \cite{Ding13, MW13} and some of their results will be used. Let $\pi$ be a path in $\Pi_\ell ^*$ and $S \subseteq E(\pi)$. A segment of $\pi$ is called an $S$-component, or a component of $S$, if it is a maximal segment of $\pi$ all of whose edges belong to $S$. Notice that $S$-components can be defined solely in terms of $S$. For two paths $\pi$ and $\pi'$, define a functional $\theta(\pi, \pi')$ to be the number of $S$-components where $S = E(\pi) \cap E(\pi')$. As $\pi$ and $\pi'$ are self-avoiding, $\theta(\pi, \pi')$ is basically the number of maximal segments shared between $\pi$ and $\pi'$.
We refer the reader to Figure \ref{fig:fig0} for an illustration.

\begin{figure}[!ht]
 \centering\includegraphics[width=0.8\textwidth, keepaspectratio]{component.png}
 \caption{{\bf Components of the set of edges common to two paths.} In this figure the sequences of vertices $v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_8, v_9$ and $v_1', v_2, v_3', v_3, v_4, v_5', v_6, v_7, v_8, v_9'$ define the paths $\pi$ and $\pi'$ respectively. The dark edges belong to $S = E(\pi) \cap E(\pi')$. Here $\theta(\pi, \pi') = 2$ with the segments $(v_3, v_4)$ and $(v_6, v_7, v_8)$ being the two $S$-components.}
\label{fig:fig0}
\end{figure}

The following result (\cite[Lemma 2.9]{Ding13}) relates the cardinality of $V(S)$, the set of all endpoints of edges in $S = E(\pi) \cap E(\pi')$, to $\theta(\pi, \pi')$ and $|S|$:
$$ |V(S)| = |S| + \theta(\pi, \pi')\,.$$
The pair $\big(\theta(\pi, \pi'), |E(\pi) \cap E(\pi')|\big)$ turns out to be sufficient for bounding the correlation between $\mathbf{1}_{G_{\pi}}$ and $\mathbf{1}_{G_{\pi'}}$ from above. Consequently it makes sense to partition $\Pi_\ell^*$ based on the value of this pair. More formally, for $\pi \in \Pi_\ell^*$ and integers $i \leq j$, define the set $A_{i,j}$ as
\begin{equation} \label{eq-def-A-i-j}
A_{i,j} \equiv A_{i,j}(\pi) = \{\pi' \in \Pi_\ell^*: \theta(\pi,\pi') = i, |E(\pi)\cap E(\pi')| = j\}.
\end{equation}
We need a number of lemmas from \cite{Ding13}.
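The identity $|V(S)| = |S| + \theta(\pi, \pi')$ can be checked directly on the example of Figure~\ref{fig:fig0}; the following sketch recomputes $|S|$, $\theta(\pi, \pi')$ and $|V(S)|$ from the two vertex sequences given in the caption:

```python
def edges(path):
    # undirected edges of a self-avoiding path given as a vertex sequence
    return [frozenset(e) for e in zip(path, path[1:])]

def theta(p1, p2):
    # number of S-components: maximal runs of consecutive edges of p1 lying in S
    S = set(edges(p1)) & set(edges(p2))
    comps, inside = 0, False
    for e in edges(p1):
        if e in S and not inside:
            comps += 1
        inside = e in S
    return comps

# the two paths from Figure fig:fig0
pi  = ["v1", "v2", "v3", "v4", "v5", "v6", "v7", "v8", "v9"]
pi2 = ["v1'", "v2", "v3'", "v3", "v4", "v5'", "v6", "v7", "v8", "v9'"]

S = set(edges(pi)) & set(edges(pi2))
VS = set().union(*S) if S else set()
print(len(S), theta(pi, pi2), len(VS))  # 3 2 5, so |V(S)| = |S| + theta
```

Here $S$ consists of the edges $(v_3, v_4)$, $(v_6, v_7)$ and $(v_7, v_8)$, grouped into the two components shown in the caption.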
\n\\begin{lemma}\n\\label{up_bnd_lemma}(\\cite[Lemma 2.10]{Ding13})\nFor any $1 \\leq \\ell \\leq n_*$ and any $\\pi \\in \\Pi_\\ell^*$, we have that \nfor any positive integers $i \\leq j$\n\\begin{eqnarray*}\n\\label{up_bnd}|A_{i,j}(\\pi)| \\leq \\tbinom{\\ell + 1}{2i} \\tbinom{n_* - i - j}{\\ell + 1 - i - \nj}2^i(\\ell + 1 -j)!\\leq \\ell^{3i}n_*^{\\ell + 1 - i - j}.\n\\end{eqnarray*}\n\\end{lemma}\n\n\\begin{lemma}( \\cite[Lemma 2.3]{Ding13})\n\\label{cond_dev1}\nLet $Z_i$ be i.i.d.\\ exponential variables with mean $\\theta > 0$ for $1 \\leq i \\leq \\ell$. For \n$1\/4 \\leq \\rho \\leq 4$, consider the variable\n\\begin{equation}\n\\label{max_dev_new}\nM_\\ell = \\sup_{1 \\leq k \\leq \\ell}\\mid\\mbox{$\\sum_{i=1}^k$} Z_i - \\rho k \\mid.\n\\end{equation}\nThen there exist absolute constants $c^*$, $C^* > 0$ such that for all $r \\geq 1$ and $\\ell \\geq \nr^2$,\n\\begin{equation*}\n\\mathrm{e}^{-C^*\\ell\/r^2} \\leq \\mathbb{P}(M_\\ell \\leq r \\mid \\mbox{$\\sum_{i=1}^\\ell$} Z_i = \\rho \n\\ell ) \\leq \\mathrm{e}^{-c^*\\ell\/r^2}\\,.\n\\end{equation*}\n\\end{lemma}\n\n\\begin{lemma}\\cite[Lemma 3.2]{Ding13}\n\\label{cond_dev}\nLet $Z_i$ be i.i.d.\\ exponential variables with mean $\\theta > 0$ for $i \\in \\mathbb{N}$. Consider \n$1 \\leq r \\leq \\sqrt{\\ell}$ and the integer intervals $[a_1, b_1], [a_2, b_2], \\cdots, [a_m, b_m]$ \nsuch that $1 \\leq a_1 \\leq b_1 \\leq a_2 \\leq \\cdots \\leq a_m \\leq b_m \\leq \\ell$ and $q = \n\\sum_{i=1}^m(b_i - a_i + 1) \\leq \\ell - 1$. Let $1\/4 \\leq \\rho \\leq 1$ and $M_\\ell$ be defined as \nin the previous lemma. Also write $A = \\cup_{i = 1}^m [a_i,b_i] \\cap \\mathbb{N}$ and $p_\\ell \n= \\mathbb{P}(M_\\ell \\leq r | \\sum_{i=1}^\\ell Z_i = \\rho \\ell)$. 
Then for all $z_j$, $j \in A$, such that
$$\mbox{$\sum_{j = a_i}^{b_i}$} z_j - \rho(b_i - a_i + 1) \leq 2r \mbox{ for all } 1 \leq i \leq m\,,$$
we have
\begin{equation}
\label{cond_dev_ineq}
\mathbb{P}(M_\ell \leq r | \mbox{$\sum_{i = 1}^{\ell}$}Z_i = \rho \ell, Z_j = z_j \mbox{ for all }j \in A) \leq C_3r\sqrt{q \wedge (\ell - q)} p_\ell 10^{100mr}\mathrm{e}^{C^*q/r^2}\,,
\end{equation}
where $C^*$ is the constant from Lemma~\ref{cond_dev1} and $C_3 > 0$ is an absolute constant.
\end{lemma}
\begin{remark}
(1) Notice that the bounds in Lemmas~\ref{cond_dev1} and \ref{cond_dev} do not depend on the particular mean of the $Z_i$'s due to Lemma~\ref{Beta}. (2) Although the bounds on $p_\ell$ in Lemma~\ref{cond_dev1} do not contain any $\rho$ (as $\rho$ was restricted to a bounded interval), $p_\ell$ actually depends on $r$ only through the ratio $r/\rho$. This follows from an application of Lemma~\ref{Beta} with a little manipulation. (3) Lemma~\ref{cond_dev} is the same as Lemma~3.2 in \cite{Ding13} except that in the latter $q$ is restricted to be at most $\ell - 10r$. But we can easily extend this to all $q \leq \ell - 1$. To see this, assume $\ell - 1 \geq q \geq \ell - 10r$. Then the right hand side in \eqref{cond_dev_ineq} becomes at least $C_3 p_\ell \mathrm{e}^{C^*\ell/r^2}\mathrm{e}^{-10C^*/r}$. Now from Lemma~\ref{cond_dev1} we get $p_\ell \mathrm{e}^{C^*\ell/r^2} \geq 1$. So the right hand side in \eqref{cond_dev_ineq} is bigger than $C_3\mathrm{e}^{-10C^*}$ whenever $\ell - 1 \geq q \geq \ell - 10r$. Increasing $C_3$ if necessary, we can make this number bigger than 1, and thus Lemma~\ref{cond_dev} follows.
\end{remark}
By a second moment computation, we can hope to show that $N^*_\ell \sim \mathbb E N^*_\ell$ with high probability. Then the main challenge is to prove that a large fraction of the good paths are mutually vertex-disjoint with high probability.
To this end, we consider a graph $\mathcal{G}_n$ where each vertex corresponds to a good path in $\mathcal W_n^*$ and an edge is present whenever the corresponding paths intersect in at least one vertex. Thus the presence of a large number of vertex-disjoint good paths in $\mathcal W^*_n$ is equivalent to the existence of a large independent subset (i.e., a subset of vertices spanning no edge) in the graph $\mathcal G_n$. The following simple lemma is sometimes referred to as Tur\'{a}n's theorem, and can be proved simply by employing a greedy algorithm (see, e.g., \cite{Erdos70}).
\begin{lemma}
\label{relation}
Let $G = (V,E)$ be a finite, simple graph with $V \neq \emptyset$. Then $G$ contains an independent subset of size at least $|V|^2 / (2|E| + |V|)$. Notice that $2|E|$ is the total degree of the vertices in $G$.
\end{lemma}
In light of Lemma~\ref{relation}, we wish to show that with high probability the total degree of the vertices in $\mathcal{G}_n$ is not large relative to $|V(\mathcal{G}_n)|$.
For this purpose, it is desirable to show that the typical number of good paths that intersect a fixed good path $\pi \in \Pi_\ell^*$ is not large. Thus, we need to estimate $\sum_{\pi' \in \Pi_{\ell,\pi}^*}\mathbb{P}(G_{\pi'}|G_{\pi})$ where $\Pi_{\ell,\pi}^*$ is the collection of all paths $\pi'$ in $\mathcal W_n^*$ sharing at least one vertex with $\pi$. Drawing upon the discussions preceding \eqref{eq-def-A-i-j}, we will first estimate $\mathbb{P}(G_{\pi'}|G_{\pi})$ for a specific value of the pair $(\theta(\pi, \pi'), |E(\pi)\cap E(\pi')|)$. Our next lemma is very similar to Lemma~3.3 in \cite{Ding13}.
\begin{lemma}
\label{conditional_prob}
Let $\pi \in \Pi_\ell^*$ and $\pi' \in A_{i,j}$ with $1 \leq i \leq j \leq \ell$.
Then there exist \nabsolute constants $\\eta_1, C_4>0$ such that for $0 < \\eta < \\eta_1$, $\\zeta_2 > 1 \\vee \n\\sqrt{2C^*\/\\mathrm{e}}$ and $\\ell \\geq \\zeta_2^2\/\\eta$ we have\n\\begin{equation}\n\\label{cond_est}\n\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq C_4(1 + o(1))\\mathbb{P}(G_{\\pi})n^j\\sqrt{\\ell\/\\eta} \n\\mathrm{e}^{-j\\eta}\\mathrm{e}^{1000\\zeta_2 i\/\\sqrt{\\eta}}\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nDenote by $S$ and $S'$ the sets $E(\\pi)\\cap E(\\pi')$ and $E(\\pi')\\setminus E(\\pi)$ respectively. \nBy standard calculus, there exists $0<\\eta_1 \\leq 1$ such that $1 + \\mathrm{e}\\eta \\geq \n\\mathrm{e}^{(1 + \\mathrm{e}\/2)\\eta}$ for all $0 < \\eta < \\eta_1$. Note that $\\mathbb{P}(G_{\\pi'} \n\\mid G_{\\pi}) = \\mathfrak p_1 \\cdot \\mathfrak p_2$, where\n\n\n\\begin{eqnarray*}\n\\mathfrak p_1 &=& \\mathbb{P}(\\lambda \\ell - 1 \\leq W(\\pi') \\leq \\lambda \\ell\\mbox{ }|\\mbox{ \n}G_{\\pi})\\,, \\\\\n \\mathfrak p_2 &=& \\mathbb{P}\\big(M(\\pi') \\leq (\\zeta_2 \/ \\sqrt{\\eta}).\n (W(\\pi')\/\\lambda \\ell)\\mbox{ }|\\mbox{ }G_{\\pi}, \\lambda \\ell - 1 \\leq W(\\pi') \n \\leq \\lambda \\ell\\big)\\,.\n\\label{decomposition}\n\\end{eqnarray*}\n\nSince the maximum deviation of a good path from its linear interpolation between starting and \nending edges is at most $\\zeta_2\/\\sqrt{\\eta}$, the weight of an $S$-component, say $s$, is at \nleast $W(\\pi)|s| \/ \\ell - 2\\zeta_2\/\\sqrt{\\eta}$ when $\\pi$ is good. Here $|s|$ denotes the number \nof edges in $s$. Adding over all the $\\theta(\\pi, \\pi')$ components of $S$ we get that\n$\\mbox{$\\sum_{e\\in S}$}W_e \\geq W(\\pi)|S| \/ \\ell - 2\\theta(\\pi, \\pi')\\zeta_2\/\\sqrt{\\eta} \\mbox{ \non }G_\\pi$. 
As $\\pi' \\in A_{i, j}$ and the weight of a good path is at least $\\lambda \\ell - 1$, the \nprevious inequality implies that on $G_\\pi$,\n$$\\mbox{$\\sum_{e\\in S}$}W_e \\geq \\lambda j - 1 - 2i\\zeta_2\/\\sqrt{\\eta}\\,.$$\nConsequently when $j \\leq \\ell - 1$,\n\\begin{eqnarray}\n\\mathfrak p_1&\\leq & \\mathbb{P}(\\mbox{$\\sum_{e\\in S'}$}W_e \\leq \\lambda|S'| + 1 + 2i \n\\zeta_2\/\\sqrt{\\eta} \\mbox{ }|\\mbox{ }G_{\\pi}) \\nonumber\\\\\n&=& \\mathbb{P}\\big(\\mathrm{Gamma}(\\ell-j, 1\/n) \\leq \\lambda(\\ell - j) + 1 + 2i \n\\zeta_2\/\\sqrt{\\eta}\\big) \\nonumber\\\\\n&\\leq & C_4' n^{-(\\ell-j)}(\\ell-j)^{-1\/2}(1 + \\mathrm{e}\\eta)^{\\ell-j} \\mathrm{e}^{2i \n\\mathrm{e}\\zeta_2 \/ \\sqrt{\\eta}(1 + \\mathrm{e}\\eta)}\\,, \\label{eq-p-1}\n\\end{eqnarray}\nwhere $C_4'>0$ is an absolute constant and the last inequality used \\eqref{gamma_density}. For the \nsecond term in the right hand side of \\eqref{decomposition}, we can apply \\eqref{cond_Property} \nand Lemma~\\ref{cond_dev} to obtain\n\\begin{eqnarray}\n\\mathfrak p_2 \\leq C_3\\mathbb{P}\\big(M(\\pi) \\leq \\zeta_2\/\\sqrt{\\eta}\\mbox{ }|\\mbox{ \n}W(\\pi)=\\lambda \\ell\\big)\\sqrt{j\\wedge (\\ell-j)\/\\eta} 10^{100i\\zeta_2 \/ \n\\sqrt{\\eta}}\\mathrm{e}^{C^*j\\eta\/\\zeta_2^2} \\,, \\label{cond_prob1}\n\\end{eqnarray}\nwhen $j \\leq \\ell - 1$ and $\\ell \\geq \\zeta_2^2 \/ \\eta$ (see the conditions in \nLemma~\\ref{cond_dev}).
Using \\eqref{cond_Property} again, we get that\n\\begin{eqnarray*}\n\\mathbb{P}\\big(M(\\pi) \\leq \\zeta_2 \/ \\sqrt{\\eta}\\mbox{ }|\\mbox{ }W(\\pi)=\\lambda \\ell\\big) \n& = & \\mathbb{P}\\big(M(\\pi) \\leq (\\zeta_2 \/ \\sqrt{\\eta}) \\cdot (W(\\pi)\/\\lambda \\ell)\\mbox{ }|\\lambda \n\\ell - 1 \\leq W(\\pi) \\leq \\lambda \\ell\\big) \\\\ \n& = & \\mathbb{P}(G_{\\pi})\/\\mathbb{P}(\\lambda \\ell - 1 \\leq W(\\pi) \\leq \\lambda \\ell)\\\\\n& = & \\mathbb{P}(G_{\\pi}) \/ \\mathbb{P}\\big(\\lambda \\ell - 1 \\leq \\mathrm{Gamma}(\\ell, 1\/n)\\leq \n\\lambda \\ell\\big)\\\\\n&\\leq & C_4''(1 + o(1))\\mathbb{P}(G_{\\pi})\\ell!\\big(n \/ \\lambda \\ell\\big)^\\ell \\,,\n\\end{eqnarray*}\nwhere $C_4'' > 0$ is an absolute constant and the last inequality follows from \n\\eqref{gamma_density}.\nPlugging the preceding inequality into \\eqref{cond_prob1} and using the fact that $\\ell! \\leq \n\\mathrm{e}\\sqrt{\\ell}(\\ell\/\\mathrm{e})^{\\ell}$ (Stirling's approximation), we get\n$$\\mathfrak p_2 \\leq \\mathrm{e}C_3C_4''(1 + o(1))\\mathbb{P}(G_\\pi)n^\\ell \\sqrt{\\ell(\\ell - \nj)\/\\eta}(1 + \\mathrm{e}\\eta)^{-\\ell}10^{100i\\zeta_2 \/ \n\\sqrt{\\eta}}\\mathrm{e}^{C^*j\\eta\/\\zeta_2^2}\\,.$$\nCombined with \\eqref{eq-p-1}, it yields that\n\\begin{eqnarray*}\n\\mathbb{P}(G_{\\pi'}|G_{\\pi}) &\\leq & \\mathrm{e}C_3C_4' C_4''\\zeta_2(1 + o(1))\\mathbb{P}\n(G_{\\pi})\\sqrt{\\ell \/ \\eta} n^j (1 + \\mathrm{e} \\eta)^{-j} 10^{100i\\zeta_2 \/ \n\\sqrt{\\eta}}\\mathrm{e}^{C^*j\\eta\/\\zeta_2^2}\\mathrm{e}^{2i\\mathrm{e}\\zeta_2 \/ \\sqrt{\\eta}(1 + \\mathrm{e}\\eta)} \\,.\n\\end{eqnarray*}\nSince $\\zeta_2 \\geq \\sqrt{2C^*\/\\mathrm{e}}$ and $\\eta < \\eta_1$ we have\n\\begin{eqnarray*}\n\\mathbb{P}(G_{\\pi'}|G_{\\pi}) &\\leq & \\mathrm{e}C_3C_4' C_4''(1 + o(1))\\mathbb{P}\n(G_{\\pi})n^j\\sqrt{\\ell\/\\eta} \\mathrm{e}^{-j\\eta}\\mathrm{e}^{1000\\zeta_2 i\/\\sqrt{\\eta}}\n\\end{eqnarray*}\nprovided $j \\leq \\ell - 1$. The case $j = \\ell$ can also be easily accommodated.
To this end let \nus first compute $\\mathbb{P}(G_{\\pi})$. It follows from \\eqref{gamma_density} and \nLemma~\\ref{cond_dev1} that\n\\begin{equation*}\n\\mathbb{P}(G_{\\pi}) \\geq (1 + o(1))(1 - \\mathrm{e}^{-1\/\\lambda})(\\lambda \\ell \/ n)^{\\ell} (1 \/ \n\\ell!) \\mathrm{e}^{-C^* \\ell \\eta \/ \\zeta_2^2}\\,.\n\\end{equation*}\n Applying Stirling's formula again, we get that for $\\zeta_2 \\geq \\sqrt{2C^*\/e}$ and $\\eta < \n \\eta_1$, \\begin{equation*}\n\\mathbb{P}(G_{\\pi}) \\geq C_4'''(1 + o(1))n^{-\\ell}\\ell^{-1\/2}\\mathrm{e}^{\\ell\\eta}\\,,\n\\end{equation*}\nfor an absolute constant $C_4'''>0$. Hence, with the choice of $C_4 = 1\/C_4''' \\vee \n\\mathrm{e}C_3C_4' C_4''$ the right hand side of \\eqref{cond_est} is at least 1, and thus \n\\eqref{cond_est} holds in this case.\n\\end{proof}\nArmed with Lemma~\\ref{conditional_prob}, we can now obtain an upper bound on $\\sum_{\\pi' \\in \n\\Pi^*_{\\ell,\\pi}}\\mathbb{P}(G_{\\pi'}|G_{\\pi})$. Similarly we can bound $\\sum_{\\pi' \\in \n\\Pi^*_\\ell}\\mathbb{P}(G_{\\pi'}|G_{\\pi})$ which is useful for the computation of $\\mathbb E((N_\\ell^*)^2)$ \nin view of the following simple observation:\n\n\\begin{equation}\n\\label{sec_mom_cond}\n\\mathbb E ((N_\\ell^*)^2) = \\mbox{$\\sum_{\\pi \\in \\Pi_\\ell^*}$}\\mathbb{P}(G_\\pi)\\mbox{$\\sum_{\\pi' \\in \n\\Pi^*_{\\ell}}$}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) = \\mathbb E(N_\\ell^*)\\mbox{$\\sum_{\\pi' \\in \n\\Pi^*_{\\ell}}$}\\mathbb{P}(G_{\\pi'}|G_{\\pi})\\,,\n\\end{equation}\nwhere the last equality follows from the fact that $\\sum_{\\pi' \\in \n\\Pi^*_{\\ell}}\\mathbb{P}(G_{\\pi'}|G_{\\pi})$ is independent of $\\pi$.\n\n\\begin{lemma}\n\\label{lemma_crucial1}\nLet $0 < \\zeta_1 < 1\/4$ and let $\\zeta_2, \\ell, \\eta$ satisfy the same conditions as stated in \nLemma~\\ref{conditional_prob}. 
Then there exists an absolute constant $C_5 > 0$ such that,\n\\begin{align}\n&\\mbox{$\\sum_{\\pi' \\in \\Pi^*_{\\ell,\\pi}}$}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq C_5(1 + \no(1))\\mathrm{e}^{1000\\zeta_2\/ \\sqrt{\\eta}}\\sqrt{\\ell^7\/\\eta^3}\\tfrac{\\mathbb{E} N^*_\\ell}{n}\\,, \n\\label{sum1}\\\\\n&\\mbox{$\\sum_{\\pi' \\in \\Pi^*_{\\ell}}$}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq (1 + o(1)) \n\\mathbb{E}N^*_\\ell \\,. \\label{sum2}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\n\nBy Lemmas \\ref{conditional_prob} and \\ref{up_bnd_lemma}, we get that for $1 \\leq i \\leq j \\leq \n\\ell$,\n\\begin{eqnarray}\n\\mbox{$\\sum_{\\pi' \\in A_{i,j}} $}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) &\\leq& (1 + o(1))n_*^{\\ell + \n1}\\mathbb{P}(G_{\\pi})\\tfrac{\\xi(\\eta,\\ell,i,j, \\zeta_1)}{n^i} \\leq (1 + \no(1))\\mathbb{E}N^*_\\ell\\tfrac{\\xi(\\eta,\\ell,i,j, \\zeta_1)}{n^i}\\,,\\label{secstage1}\n\\end{eqnarray}\nwhere $\\xi(\\eta,\\ell,i,j, \\zeta_1)$ is a number depending only on $(\\eta, \n\\ell, i, j, \\zeta_1)$ (so in particular, $\\xi(\\eta,\\ell,i,j, \\zeta_1)$ does not depend \non $n$). It is also clear that \n$$\\mbox{$\\sum_{\\pi' \\in A_{0, 0}} $}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq \\mbox{$\\sum_{\\pi' \\in A_{0, \n0}} $}\\mathbb{P}(G_{\\pi'}) \\leq \\mathbb E N^*_\\ell\\,.$$\nCombined with \\eqref{secstage1}, it yields \\eqref{sum2}. It remains to prove \\eqref{sum1}. To this \nend, we note that the major contribution to the term $\\sum_{\\pi' \\in \\Pi^*_{\\ell,\\pi}} \\mathbb{P}\n(G_{\\pi'}|G_{\\pi})$ comes from those paths $\\pi'$ with $\\theta(\\pi,\\pi') = 1$ or $|V(\\pi') \\cap \nV(\\pi)| = 1$. Thus, we revisit \\eqref{secstage1} for the case of $i = 1$.
By Lemmas \n\\ref{conditional_prob} and \\ref{up_bnd_lemma} again, we get that\n\\begin{eqnarray}\n\\mbox{$\\sum_{1 \\leq j \\leq \\ell} \\sum_{A_{1,j}}$}\\mathbb{P}(G_{\\pi'} | G_{\\pi}) \n&\\leq & 2C_4 (1 + o(1))\\mathrm{e}^{1000\\zeta_2 \/ \n\\sqrt{\\eta}}\\sqrt{\\ell^{7}\/\\eta}n_*^{\\ell+1}\\mathbb{P}(G_{\\pi}) n^{-1} \\mbox{$\\sum_{1\\leq j \\leq \n\\ell}$} \\mathrm{e}^{-j\\eta}(1 - \\zeta_1 \\eta)^{-j} \\nonumber\\\\\n& \\leq & 2C_4 (1 + o(1))\\mathrm{e}^{1000\\zeta_2 \/ \\sqrt{\\eta}}\\sqrt{\\ell^{7}\/\\eta} \n\\mathbb{E}N^*_\\ell (n(1-\\mathrm{e}^{-\\frac{\\eta}{2}}))^{-1} \\nonumber \\\\%\\mbox{\\hspace{1 cm}$[\\because \\zeta_1 \\leq 1\/4$]}\\\\\n&\\leq & 8C_4 (1 + o(1))\\mathrm{e}^{1000\\zeta_2 \/ \n\\sqrt{\\eta}}\\sqrt{\\ell^{7}\/\\eta^3}\\mathbb{E}N^*_\\ell n^{-1}\\,, \\label{eq-A-1-j}\n\\end{eqnarray}\nwhere the last two inequalities follow from the facts that $\\zeta_1 < 1\/4$ and $\\mathrm{e}^{-\\eta \n\/ 2} \\leq 1 - \\eta\/4$ whenever $0 < \\eta < 1$. \nWe still need to consider paths that share vertices with $\\pi$ but no edges. For $1\\leq i\\leq \n\\ell$, define $B_i$ to be the collection of paths which share $i$ vertices with $\\pi$ but no \nedges, i.e., \n\\begin{equation*}\nB_i = \\{\\pi' \\in \\Pi_\\ell^* : |V(\\pi') \\cap V(\\pi)| = i, E(\\pi') \\cap E(\\pi) = \\emptyset\\}\\,.\n\\end{equation*}\nWe need an upper bound on the size of $B_i$. To this end notice that there are $\\tbinom{\\ell + 1}\n{i}$ many choices for $V(\\pi') \\cap V(\\pi)$ as the cardinality of the latter is $i$ and these vertices \ncan be placed along $\\pi'$ in at most $\\tbinom{\\ell + 1}{i}i!$ many different ways. Also the \nnumber of ways we can choose the remaining $\\ell + 1 - i$ vertices is at most $n_*^{\\ell + 1 - \ni}$.
Multiplying these numbers we get\n$$|B_i| \\leq \\tbinom{\\ell + 1}{i}^2i!n_*^{\\ell + 1 - i}\\,.$$\nSince the edge sets are disjoint, $\\mathbb{P}(G_{\\pi'}|G_{\\pi}) = \\mathbb{P}(G_{\\pi})$ for all \n$\\pi' \\in B_i$ and $1\\leq i \\leq \\ell$. So we have\n\\begin{equation}\n\\mbox{$\\sum_{\\pi' \\in B_{i}} $}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq (1+o(1)) \\tbinom{\\ell + 1}{i}^2i!\n(1 - \\zeta_1\\eta)^{-i}\\tfrac{\\mathbb{E}N^*_\\ell}{n^i} \\leq (8+o(1)) \n\\ell^2\\tfrac{\\mathbb{E}N^*_\\ell}{n} \\,.\\label{secstage2}\n\\end{equation}\nCombined with \\eqref{eq-A-1-j}, it completes the proof of \\eqref{sum1}.\n\\end{proof}\nWe will now proceed with our plan of finding a large independent subset of $\\mathcal{G}_n$. For \nany two paths $\\pi$ and $\\pi'$ in $\\Pi^*_{\\ell}$, define an event \n$$H_{\\pi,\\pi'} = H_{\\pi,\\pi'; \\eta, \\zeta_2} = \\begin{cases}\nG_{\\pi} \\cap G_{\\pi'} &\\mbox{ if } V(\\pi) \\cap V(\\pi') \\neq \\emptyset\\,,\\\\\n\\emptyset, &\\mbox{ otherwise}\\,.\n\\end{cases}$$\nWriting $N_{\\ell}'= N_{\\ell; \\eta, \\zeta_1, \\zeta_2}' = \\sum_{\\pi, \\pi' \\in \n\\Pi_{\\ell}^*}\\mathbf{1}_{H_{\\pi,\\pi'}}$, we see that $N'_\\ell = 2 |E(\\mathcal{G}_n)| + |\nV(\\mathcal{G}_n)|$. Also notice that $N^*_\\ell = |V(\\mathcal{G}_n)|$. 
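In passing, the bookkeeping identity $N_\\ell' = 2|E(\\mathcal{G}_n)| + |V(\\mathcal{G}_n)|$ and the greedy bound behind Lemma~\\ref{relation} admit a quick sanity check on a toy intersection graph. The following Python sketch is purely illustrative: the small vertex sets standing in for good paths, and the helper \\texttt{greedy\\_independent\\_set}, are our own made-up example, not part of the model.

```python
def greedy_independent_set(adj):
    """Greedy procedure behind Lemma `relation' (Turan's theorem): repeatedly
    take a minimum-degree vertex and delete it together with its neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}    # work on a copy
    indep = []
    while adj:
        v = min(adj, key=lambda u: (len(adj[u]), u))  # tie-break for determinism
        indep.append(v)
        dead = adj.pop(v) | {v}
        for u in dead - {v}:
            adj.pop(u, None)
        for u in adj:
            adj[u] -= dead
    return indep

# Toy intersection graph: made-up vertex sets stand in for good paths,
# with an edge whenever two of them share a vertex.
paths = [{1, 2, 3}, {3, 4, 5}, {6, 7}, {5, 6}]
V = len(paths)
edges = [(a, b) for a in range(V) for b in range(a + 1, V) if paths[a] & paths[b]]
E = len(edges)

# N' counts ordered pairs (pi, pi'), including pi = pi', that intersect.
N_prime = sum(1 for p in paths for q in paths if p & q)
assert N_prime == 2 * E + V                        # N' = 2|E(G_n)| + |V(G_n)|

adj = {a: set() for a in range(V)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
S = greedy_independent_set(adj)
assert len(S) >= V * V / (2 * E + V)               # bound of Lemma `relation'
```

The greedy step removes at most $1 + \\deg(v)$ vertices for the minimum-degree vertex $v$, which is what yields the $|V|^2\/(2|E| + |V|)$ guarantee.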
As an immediate consequence \nof Lemma~\\ref{lemma_crucial1}, we can compute an upper bound of $\\mathbb{E}N_{\\ell}'$ as follows:\n\\begin{eqnarray}\n\\mathbb{E}N_{\\ell}' = \\mbox{$\\sum_{\\pi \\in \\Pi^*_\\ell} $} \\mathbb{P}(G_\\pi) \\mbox{$\\sum_{\\pi' \n\\in \\Pi^*_{\\ell,\\pi}}$}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq C_5(1 + \no(1))\\mathrm{e}^{1000\\zeta_2\/\\sqrt{\\eta}}\\sqrt{\\ell^{7}\/\\eta^3}\\tfrac{(\\mathbb{E}N^*_{\\ell})^2} \n{n}.\\label{expected_edge}\n\\end{eqnarray}\nIf $N_\\ell^*$ and $N_\\ell'$ are concentrated around their respective means in the sense that \n$N_\\ell^* = \\mathbb E N_\\ell^*(1 + o(1))$ and $N_\\ell' = \\mathbb E N_\\ell'(1 + o(1))$ with high probability, \nthen we can use Lemma~\\ref{relation} and \\eqref{expected_edge} to derive a lower bound on the size \nof a maximum independent subset of $\\mathcal G_n$. For this purpose, it suffices to show that \n$\\mathbb{E}((N_\\ell^*)^2) = (\\mathbb{E}N_\\ell^*)^2(1 + o(1))$ and $\\mathbb{E}((N_\\ell')^2) = \n(\\mathbb{E}N_\\ell')^2(1 + o(1))$. The former has already been addressed by \\eqref{sum2} (see \n\\eqref{sec_mom_cond}). For the latter we need to estimate contributions from terms like \n$\\mathbb{P}(H_{\\pi_1,\\pi_2} \\cap H_{\\pi_3,\\pi_4})$ in the second moment calculation for $N_\\ell'$. \nOur next lemma will be useful for this purpose.\n\\begin{lemma}\n\\label{count_vertices}\nLet $\\pi_1, \\pi_2, \\pi_3, \\pi_4$ be paths in $\\Pi_\\ell^*$ such that $|E(\\pi_3 \\cup \\pi_4)| = \n2\\ell - j$ and $|E(\\pi_1 \\cup \\pi_2) \\cap E(\\pi_3 \\cup \\pi_4)| = j'$ where $0 \\leq j \\leq \\ell$ \nand $1 \\leq j' \\leq 2\\ell - j$. Also assume that $V(\\pi_3 \\cap \\pi_4) \\neq \\emptyset$. 
Then,\n\\begin{equation}\n\\label{vertex_count}\n|V(\\pi_3) \\cap V(\\pi_4)| + |V(\\pi_3 \\cup \\pi_4) \\cap V(\\pi_1 \\cup \\pi_2)| \\geq j + j' +2\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{figure}[!ht]\n \\centering\\includegraphics[width=0.8\\textwidth]{vertex_count.png}\n \\caption{{\\bf Removing edges from union of two paths.} In these figures the sequences of vertices \n $v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_8, v_9$ and $v_1', v_2, v_3, v_4, v_5', v_6, \n v_7, v_8, v_9'$ define the paths $\\pi_4$ and $\\pi_3$ respectively. $C^O_1$ and $C^O_2$ are the \n two connected components of $\\pi_3 \\cap \\pi_4$. In the figure at the top, the vertices $v_4, \n v_5, v_6, v_5'$ define a cycle. After removing the edge $(v_4, v_5)$ from the only segment in \n $E(\\pi_4) \\setminus E(\\pi_3)$ between $C^O_1$ and $C^O_2$, we get an acyclic graph displayed at \n the bottom.}\n \\label{fig:fig2}\n\\end{figure}\n\\begin{proof}\nSuppose the graph $\\pi_3 \\cap \\pi_4$ has exactly $k + 1$ (connected) components, namely $C^O_1, \n\\cdots, C^O_{k+1}$. Notice that $k$ is nonnegative as $\\pi_3 \\cap \\pi_4 \\neq \\emptyset$. Since $|\nE(\\pi_3 \\cap \\pi_4)| = j$ and $\\pi_3 \\cap \\pi_4$ is acyclic with $k + 1$ components, we have that \n$|V(\\pi_3 \\cap \\pi_4)| = j + k + 1$. Now suppose it were shown that $\\pi_3 \\cup \\pi_4$ can be made \nacyclic by removing at most $k$ edges while keeping the vertex set the same, and call this new graph \n$H$. One would then have,\n\\begin{equation*}\n|V\\big(H \\cap (\\pi_1 \\cup \\pi_2)\\big)| \\geq |E\\big(H \\cap (\\pi_1 \\cup \\pi_2)\\big)| + 1 \\geq |\nE\\big((\\pi_3 \\cup \\pi_4) \\cap (\\pi_1 \\cup \\pi_2)\\big)| - k + 1 = j' - k + 1\\,.\n\\end{equation*}\nAdding this to $|V(\\pi_3 \\cap \\pi_4)| = j + k + 1$ would immediately give \\eqref{vertex_count}.
In \nthe remaining part of this proof we will show that one can remove $k$ edges from $\\pi_3 \\cup \n\\pi_4$ so that the resulting graph becomes acyclic.\\par\n\nLet $\\mathcal C$ be a cycle in $\\pi_3 \\cup \\pi_4$. Since $\\pi_3$ and $\\pi_4$ are acyclic, \n$\\mathcal C$ consists of an alternating sequence of segments in $E(\\pi_4) \\setminus E(\\pi_3)$ \nand $E(\\pi_3) \\setminus E(\\pi_4)$ interspersed with segments in any one of the $C^O_i$'s (possibly \ntrivial i.e.~consisting of a single vertex). This implies that for some $1 \\leq i, i' \\leq \nk+1$, $\\mathcal C$ contains a (nontrivial i.e.~of positive length) segment in $E(\\pi_4) \\setminus \nE(\\pi_3)$ joining $C^O_i$ and $C^O_{i'}$. In fact $i \\neq i'$ since $\\pi_4$ is acyclic. Hence the \nonly case we need to consider is when $k \\geq 1$. As $\\pi_4$ is a path, $C^O_1, C^O_2, \\cdots, \nC^O_{k+1}$'s are vertex-disjoint segments (possibly trivial) aligned along $\\pi_4$ in some order \nwith $k$ intervening (nontrivial) segments in $E(\\pi_4) \\setminus E(\\pi_3)$. Pick one edge from \neach of these $k$ segments. It follows from the discussions so far that $\\mathcal C$ must contain \none of these edges. Consequently removing these $k$ edges from $\\pi_3 \\cup \\pi_4$ would make the \nresulting graph acyclic. We refer the readers to Figure \\ref{fig:fig2} for an illustration.\n\\end{proof}\nWe will now use \\eqref{vertex_count} and Lemma~\\ref{lemma_crucial1} to show that $N^*_\\ell$ and \n$N_\\ell'$ concentrate around their expected values.\n\\begin{lemma}\n\\label{lemma_crucial3}\nAssume the same conditions on $\\zeta_1, \\zeta_2$, $\\ell$ and $\\eta$ as in \nLemma~\\ref{lemma_crucial1}. 
Then there exists $g_{\\ell, \\eta} = g_{\\ell, \\eta; \\zeta_1, \\zeta_2}: \n\\mathbb N \\to [0, \\infty)$ depending on $\\ell, \\eta$ (and $\\zeta_1, \\zeta_2$) with $g_{\\ell, \n\\eta}(n)\\to 0$ as $n\\to \\infty$ such that the \nfollowing hold:\\\\\n(1) $\\mathbb{P}\\big(|N^*_\\ell - \\mathbb{E} N^*_\\ell| \\leq g_{\\ell,\\eta}(n) \n\\mathbb{E}N^*_\\ell\\big) \\to 1$ as $n \\to \\infty$;\\\\\n(2) $\\mathbb{P}\\big(|N_\\ell' - \\mathbb{E} N_\\ell'| \\leq g_{\\ell,\\eta}(n)\\mathbb{E} N_\\ell'\\big) \n\\to 1$ as $n \\to \\infty$.\n\\end{lemma}\n\\begin{proof}\nThe proof of (1) is rather straightforward. By \\eqref{sec_mom_cond} and \\eqref{sum2} we see that\n\\begin{eqnarray*}\n\\mathbb{E}((N^*_\\ell)^2) \\leq (\\mathbb{E}N^*_\\ell)^2(1 + o(1))\\,.\n\\end{eqnarray*}\nAn application of Chebyshev's inequality then yields Part (1). \nIn order to prove Part (2), we first argue that $\\mathbb{E}N_\\ell' = \\Theta(n)$. Similar to the \ncomputation of \\eqref{first_bnd}, we can show that $\\mathbb{E}N^*_\\ell$ is $O(n)$. But then \n\\eqref{expected_edge} tells us that the same is true for $\\mathbb{E}N_\\ell'$. For the lower bound, \nnotice that given any path $\\pi_1$ in $\\Pi_\\ell^*$, there are $\\Theta(n^{\\ell})$ many paths in \n$\\Pi_\\ell^*$ that intersect $\\pi_1$ in exactly one vertex. Furthermore for any such pair $(\\pi_1, \n\\pi_2)$ we have\n$$\\mathbb{P}(H_{\\pi_1,\\pi_2})= (\\mathbb{P}(G_{\\pi_1}))^2 = \\Theta(n^{-2\\ell}),$$ where the last \nequality follows from \\eqref{gamma_density} (see the computation in \\eqref{first_bnd}) and \nLemma~\\ref{cond_dev1}. Therefore, we obtain that\n\\begin{eqnarray*}\n\\mathbb{E}N_\\ell' = \\Theta(n^{\\ell + 1}) \\mbox{$\\sum_{\\pi_2 \\in \\Pi_{\\ell,\\pi_1}^*}$}\\mathbb{P}\n(G_{\\pi_1} \\cap G_{\\pi_2})\\geq \\Theta(n^{\\ell + 1}) \\Theta(n^{\\ell}) \\Theta(n^{-2\\ell})= \n\\Theta(n).\n\\end{eqnarray*}\nNext we estimate $\\mathbb E ((N_\\ell')^2)$.
For this purpose, we first consider two fixed $\\pi_1, \n\\pi_2\\in \\Pi^*_\\ell$ such that $V(\\pi_1) \\cap V(\\pi_2) \\neq \\emptyset$. For $0 \\leq j \\leq \\ell$ \nand $1 \\leq j' \\leq 2\\ell - j$, let $\\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}$ be the collection of all \npairs of paths $(\\pi_3, \\pi_4) \\in \\Pi_\\ell^*$ such that $|E(\\pi_1 \\cup \\pi_2) \\cap E(\\pi_3 \\cup \n\\pi_4)| = j'$ and $|E(\\pi_3 \\cup \\pi_4)| = 2\\ell - j$.\nFor $(\\pi_3, \\pi_4)\\in \\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}$, we see that $|E(\\pi_3 \\cup \\pi_4) \n\\setminus E(\\pi_1 \\cup \\pi_2)| = 2\\ell - j - j'$ and thus by a similar reasoning as employed in \n\\eqref{cond_prob_order} we get\n$$\\mathbb{P}(H_{\\pi_3, \\pi_4}|H_{\\pi_1, \\pi_2}) = O(n^{j + j' - 2\\ell})\\,.$$ Now let $\\Pi_{\\pi_1, \n\\pi_2}^{\\ell,j,j'}(n_1, n_2) \\subseteq \\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}$ contain all the pairs \n$(\\pi_3, \\pi_4)$ such that $|V(\\pi_3) \\cap V(\\pi_4)| = n_1 \\geq 1$ and $|V(\\pi_3 \\cup \\pi_4) \\cap \nV(\\pi_1 \\cup \\pi_2)| = n_2$. Then $|V(\\pi_3 \\cup \\pi_4) \\setminus V(\\pi_1 \\cup \\pi_2)| = 2\\ell + 2 \n- n_1 - n_2$ and consequently $|\\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}(n_1, n_2)| = O(n^{2\\ell + 2 - n_1 - \nn_2})$.\nBy Lemma~\\ref{count_vertices}, we know that $n_1 + n_2 \\geq j + j' +2$ for $(\\pi_3, \\pi_4)\\in \n\\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}(n_1, n_2)$.
Therefore, \n\\begin{equation*}\n\\mbox{$\\sum_{(\\pi_3, \\pi_4) \\in \\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}}$}\\mathbb{P}(H_{\\pi_3, \\pi_4}|\nH_{\\pi_1, \\pi_2}) = \\mbox{$\\sum_{1\\leq n_1, n_2 \\leq 2\\ell+2} \\sum_{(\\pi_3, \\pi_4) \\in \\Pi_{\\pi_1, \n\\pi_2}^{\\ell,j,j'}(n_1, n_2)}$} \\mathbb{P}(H_{\\pi_3, \\pi_4}|H_{\\pi_1, \\pi_2}) = O(1)\\,.\n\\end{equation*}\nThis implies that\n$$ \\mbox{$ \\sum_{(\\pi_1, \\pi_2), (\\pi_3, \\pi_4)} $}\\mathbb P(H_{\\pi_1, \\pi_2} \\cap H_{\\pi_3, \n\\pi_4}) = O(1) \\mathbb E N'_\\ell\\,,$$\nwhere the sum is over all pairs such that $E(\\pi_1 \\cup \\pi_2) \\cap E(\\pi_3 \\cup \\pi_4) \n\\neq \\emptyset$. In addition, \n$$ \\mbox{$ \\sum_{(\\pi_1, \\pi_2), (\\pi_3, \\pi_4)} $}\\mathbb P(H_{\\pi_1, \\pi_2} \\cap H_{\\pi_3, \n\\pi_4}) = (1+o(1))( \\mathbb E N'_\\ell)^2\\,,$$\nwhere the sum is over all pairs such that $E(\\pi_1 \\cup \\pi_2) \\cap E(\\pi_3 \\cup \\pi_4) = \n\\emptyset$ (thus in this case $H_{\\pi_1, \\pi_2}$ is independent of $H_{\\pi_3, \\pi_4}$). \nCombined with the fact that $\\mathbb E N'_\\ell = \\Theta(n)$, it gives that $\\mathbb E ((N'_\\ell)^2) = (1+o(1)) \n(\\mathbb E N'_\\ell)^2$. At this point, an application of Chebyshev's inequality completes the proof of \nthe lemma.\n\\end{proof}\nWe are now well-equipped to prove the main lemma of this subsection. For convenience of notation, \nwrite \n\\begin{equation}\\label{eq-def-f-l-eta}\nf(\\ell, \\eta) = f_{\\zeta_2}(\\ell, \\eta) = \\mathrm{e}^{-1000\\zeta_2\/\\sqrt{\\eta}}\\sqrt{\\eta^3 \/ \n\\ell^7}\\,.\n\\end{equation}\n\\begin{lemma}\n\\label{lemma_crucial2}\nAssume the same conditions on $\\zeta_1, \\zeta_2$, $\\ell$ and $\\eta$ as in \nLemma~\\ref{lemma_crucial1}. Let $S_{n, \\eta, \\ell} = S_{n, \\eta, \\ell; \\zeta_1, \\zeta_2}$ be a set \nwith maximum cardinality among all subsets of $\\Pi_\\ell^*$ containing only pairwise disjoint good \npaths.
Then there exists an absolute constant $C_6>0$ such that, \n\\begin{equation}\n\\label{num_dis_good}\n\\mathbb{P}(|S_{n, \\eta, \\ell}| \\geq C_6f(\\ell, \\eta) n) \\to 1\\mbox{ as }n\\to\\infty\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nLet $h(\\ell, \\eta) = C_5\\mathrm{e}^{1000\\zeta_2\/\\sqrt{\\eta}}\\sqrt{\\ell^7 \/ \\eta^3}$. By \nLemma~\\ref{lemma_crucial3} and \\eqref{expected_edge}, we assume without loss of generality that \n$$|N^*_\\ell - \\mathbb{E}N^*_{\\ell}| \\leq g_{\\ell,\\eta}(n)\\mathbb{E}N^*_{\\ell} \\mbox{ and }\n N_\\ell' \\leq (1 + o(1))h(\\ell, \\eta)\\tfrac{(\\mathbb{E}N^*_\\ell)^2}{n} (1 + g_{\\ell,\\eta}(n)),$$\nwhere $g_{\\ell, \\eta}(n)$ is defined as in Lemma~\\ref{lemma_crucial3}. Since $N_\\ell' = 2|\nE(\\mathcal{G}_n)| + |V(\\mathcal{G}_n)|$, by Lemma~\\ref{relation} we get that the graph \n$\\mathcal{G}_n$ has an independent subset of size at least\n$$ (N^*_\\ell)^2 \/ N_{\\ell}' \\geq n(1+o(1))\/h(\\ell,\\eta)\\,.$$\nTherefore, with high probability $|S_{n, \\eta, \\ell}| \\geq n\/(2h(\\ell,\\eta))$, which leads to \n\\eqref{num_dis_good} for $C_6 = 1\/(2C_5)$.\n\\end{proof}\n\n\\subsection{Connecting short light paths into a long one}\nWe set $\\zeta_1 = 1\/5$ and $\\zeta_2 = 1 + \\sqrt{2C^*\/\\mathrm{e}}$ in this subsection. Note that \nthis choice satisfies the conditions in Lemma~\\ref{lemma_crucial2}. Denote by $\\mathcal{E}_{n, \n\\eta, \\ell}$ the event $\\{|S_{n, \\eta, \\ell}| \\geq C_6f(\\ell, \\eta) n\\}$.\\\\\nThe remaining part of our scheme is to connect a fraction of these disjoint good paths in a \nsuitable way to form a light and long path $\\gamma$. In order to describe our algorithm for the \nconstruction of $\\gamma$, we need a few more notations. Denote the vertex sets \n$V(\\mathcal{W}_n^*)$ and $V(\\mathcal{W}_n)\\setminus V(\\mathcal{W}_n^*)$ by $V_1$ and $V_2$ \nrespectively.
Let $\\delta > 0$ be a number and $\\nu > 0$ be an integer satisfying \n\\begin{equation}\\label{algo_inequation}\n1 \\leq \\delta n \/\\ell \\leq |S_{n, \\eta, \\ell}|\\mbox{ and }\\delta n \\nu \/ \\ell \\leq |V_2|.\n\\end{equation}\nNow label the paths in $S_{n, \\eta, \\ell}$ as $\\pi_1, \\pi_2, \\ldots$ in some arbitrary way. Our \naim is to build up the path $\\gamma$ in a step-by-step fashion starting from $\\pi_1$. In each step \nwe will connect $\\gamma$ to some $\\pi_j$ by a path of length 2 whose \\emph{middle} vertex is in \n$V_2$. These paths will be referred to as \\emph{bridges}. To leverage additional flexibility we \nalso demarcate two segments of length $\\lfloor \\ell\/4 \\rfloor$, one at each end of the paths \n$\\pi_j$, which we call \\emph{end segments}.\nThese end segments will allow us to ``choose'' endpoints of $\\pi_j$'s while connecting them (as \nsuch, it is possible that we only keep half of the vertices of $\\pi_j$ in $\\gamma$). A vertex $v$ \nwill be said to be adjacent to a path or an edge if it is an endpoint of that path or edge. If an \nedge $e$ has exactly one endpoint in a set $S$, we denote that endpoint by $v_{e,S}$. The following \nalgorithm, referred to as $\\mathrm{BRIDGE}(\\nu, \\ell, \\delta)$, will construct a long path \n$\\gamma$. \nSee Figure \\ref{fig:fig1} for an illustration.\n\\smallskip\n\n\\textbf{Initialization.} $\\gamma = \\pi_1$, $T$ is the set of all vertices which are in end \nsegments of $\\pi_j$'s for $j \\geq 2$, $M = V_2$, $P = \\emptyset$ and designate an end segment \nof $\\gamma$ as the open end $\\gamma_O$. Also let $v$ be the endpoint of $\\gamma$ \\textbf{not} in \n$\\gamma_O$.\n\n\nNow repeat the following sequence of steps $\\lfloor \\delta n \/ \\ell \\rfloor - 1$ times:\n\n\n\n\\textbf{Step 1.} Repeat $\\nu$ times: find the lightest edge $e$ between $\\gamma_O$ and $M$, remove \n$v_{e,M}$ from $M$ and include it in $P$.
These $\\nu$ edges will be called predecessor edges \n(so at the end of this step, $|P| = \\nu$).\n\n\\vspace{0.05 cm}\n\n\\textbf{Step 2.} Find the lightest edge between $P$ and $T$. Call it $e'$. Then $v_{e',T}$ \ncomes from an end segment of some path in $S_{n, \\eta, \\ell}$, say $\\pi$.\n\n\\vspace{0.05 cm}\n\n\\textbf{Step 3.} The edge $e'$ and the \\emph{unique} predecessor edge adjacent to $v_{e',P}$ \ndefine a path $b$ of length 2 (so $b$ connects a vertex in $\\gamma_O$ to a vertex in $\\pi$). Let \n$w$ be the endpoint of $\\pi$ not in the end segment that $v_{e',T}$ came from. Then there is a \nunique path $\\gamma'$ in the tree $\\gamma \\cup b \\cup \\pi$ between $v$ and $w$. Set $\\gamma = \n\\gamma'$ and $\\gamma_O =$ the end segment of $\\pi$ containing $w$.\n\n\\vspace{0.05 cm}\n\n\\textbf{Step 4.} Remove the vertices on the end segments of $\\pi$ from $T$ and reset $P$ to \n$\\emptyset$.\newline \n\\begin{figure}[!ht]\n \\centering\\includegraphics[width=0.8\\textwidth]{algorithm.png}\n \\caption{{\\bf Illustrating an iteration of BRIDGE for $\\nu = 2$ and $\\ell = 4$.} The edges $e'$ \n and $e''$ define the path $b$. So in this iteration the paths $\\gamma$ and $\\pi$ are shortened \n slightly before being joined via $b$.}\n \\label{fig:fig1}\n\\end{figure}\n\\newline\nNotice that the conditions in \\eqref{algo_inequation} ensure that we never run out of vertices in \n$T$ or $M$ during the first $\\lfloor \\delta n \/ \\ell \\rfloor - 1$ iterations of steps 1 to 4. Thus \nwhat we described above is a valid algorithm for such choices of $\\delta$ and $\\nu$. Denote the \nlength and average weight of the path $\\gamma$ generated by $\\mathrm{BRIDGE}(\\nu, \\ell, \\delta)$ \nas $L_{\\text{BRIDGE}}(\\nu, \\ell, \\delta)$ and $A_{\\text{BRIDGE}}(\\nu, \\ell, \\delta)$ respectively \nwhen $\\delta, \\nu, \\ell$ satisfy these inequalities.
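As a rough, simplified illustration of the chaining performed by $\\mathrm{BRIDGE}$ (not the construction analyzed below), one can sketch in Python how disjoint paths are joined through length-2 bridges. All names, the deterministic weight function, and the trimming rule are our own assumptions; in particular we suppress the two-stage predecessor-edge bookkeeping of Step 1 and pick the lightest bridge greedily.

```python
def bridge_toy(paths, middle_pool, weight, seg=1):
    """Toy rendition of BRIDGE: chain vertex-disjoint paths via length-2
    bridges through `middle_pool`, greedily picking light connections.
    `seg` plays the role of the end-segment length floor(l/4)."""
    gamma = list(paths[0])
    pool = set(middle_pool)
    for pi in paths[1:]:
        open_end = gamma[-(seg + 1):]              # open end segment of gamma
        # lightest edge from the open end into the pool of middle vertices
        u, m = min(((u, m) for u in open_end for m in pool),
                   key=lambda e: weight(*e))
        pool.discard(m)                            # middle vertices are used once
        # lightest edge from m into one of the two end segments of pi
        ends = pi[:seg + 1] + pi[-(seg + 1):]
        v = min(ends, key=lambda x: weight(m, x))
        # trim gamma at u, trim pi at v, and join them through m
        gamma = gamma[:gamma.index(u) + 1] + [m]
        if v in pi[:seg + 1]:
            gamma += pi[pi.index(v):]
        else:
            gamma += pi[pi.index(v)::-1]           # enter pi from its far end
    return gamma

# two disjoint "good paths", two candidate middle vertices, toy weights
chain = bridge_toy([[0, 1, 2, 3, 4], [10, 11, 12, 13, 14]],
                   [100, 101], lambda u, v: abs(u - v))
```

In the actual algorithm the edge weights are i.i.d.\\ $\\mathrm{Exp}(1\/n)$ and each bridge is chosen in two stages, so that the relevant edge weights remain unexplored at the start of every iteration; that independence is what drives the mean and variance bounds in the proof of Lemma~\\ref{algo}.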
For the sake of completeness we may define these \nquantities to be $0$ and $\\infty$ respectively and regard the output path $\\gamma$ as ``empty'' if \nany one of the inequalities in \\eqref{algo_inequation} fails to hold. We are now just one lemma \nshort of proving the lower bound in \\eqref{main_prop}.\n\\begin{lemma}\n\\label{algo}\nFor any $0 < \\eta < \\eta_2$ where $\\eta_2 > 0$ is an absolute constant there exist positive \nintegers $\\nu = \\nu(\\eta)$, $\\ell = \\ell(\\eta) \\geq \\zeta_2^2 \/ \\eta$ and a positive number \n$\\delta = \\delta(\\eta)$ such that\n$$\\mathbb{P}\\big(L_{\\text{BRIDGE}}(\\nu, \\ell, \\delta) \\geq \\mathrm{e}^{-C_{7}\/\\sqrt{\\eta}} n\\mbox{ \nand }A_{\\text{BRIDGE}}(\\nu, \\ell, \\delta) \\leq 1\/\\mathrm{e} + 12\\eta \\mid \\mathcal{E}_{n, \\eta, \n\\ell}\\big) \\to 1$$\nas $n$ tends to infinity. Here $C_{7}>0$ is an absolute constant.\n\\end{lemma}\n\\begin{proof}\nWe will omit the phrase ``conditioned on $\\mathcal{E}_{n, \\eta, \\ell}$'' while talking about \nprobabilities in this proof (barring formal expressions) although that is to be implicitly \nassumed. We use $\\mathrm{Exp}(1 \/ \\theta)$ to denote the distribution of an exponential random \nvariable with mean $\\theta > 0$. Define $\\mathcal B_{n, \\eta, \\nu, \\ell, \\delta}$ to be the \nevent that the total weight of bridges does not exceed $3\\ell\\eta \\times \\lfloor \\delta n\/\\ell \n\\rfloor$. Notice that if any one of the inequalities in \\eqref{algo_inequation} does not hold, \n$\\gamma$ is ``empty'' and hence $\\mathcal B_{n, \\eta, \\nu, \\ell, \\delta}$ is a sure event. Suppose \n$\\delta, \\nu$ and $\\ell$ are such that \\eqref{algo_inequation} is satisfied. We will first bound \nthe average weight $A(\\gamma)$ of $\\gamma$ assuming that $\\mathcal B_{n, \\eta, \\nu, \\ell, \\delta}$ \noccurs. Let $\\ell_i$ be the length of the segment selected by the algorithm in the $i$-th \niteration.
We see that its weight can be no more than $\\lambda \\ell_i + 2\\zeta_2 \/ \\sqrt{\\eta}$, \nsince the segment is chosen from a good path of average weight at most $\\lambda$ whose maximum \ndeviation from its linear interpolation is at most $\\zeta_2 \/ \\sqrt{\\eta}$ (see \\eqref{good_event} \nas well as the proof of Lemma~\\ref{conditional_prob}). Thus the total weight of edges in $\\gamma$ \nfrom the good paths is bounded by $\\lambda L + \\lfloor \\delta n \/ \\ell \\rfloor \\cdot (2\\zeta_2 \/ \n\\sqrt{\\eta})$ where $L = \\sum_{i}\\ell_i$. Adding this to the total weight of bridges we get, with \nprobability tending to 1 as $n\\to \\infty$,\n\\begin{equation*}\nW(\\gamma) \\leq L( 1 \/ \\mathrm{e} + \\eta) + \\lfloor \\delta n \/ \\ell \\rfloor \\cdot (2\\zeta_2 \/ \n\\sqrt{\\eta}) + \\lfloor \\delta n\/\\ell \\rfloor \\cdot 3\\ell\\eta.\n\\end{equation*}\nSince the algorithm selects at least $\\ell \/ 2$ edges from each of the $\\lfloor \\delta n \/ \\ell \n\\rfloor$ good paths it connects, we have $\\ell_i\\geq \\ell\/2$ for each $i$ and thus $L \\geq \\lfloor \n\\delta n \/ \\ell \\rfloor \\times \\ell\/2$. Therefore, \n\\begin{eqnarray*}\nA(\\gamma)\n &\\leq & 1\/\\mathrm{e} + \\eta + \\lfloor \\delta n \/ \\ell \\rfloor \\cdot (2\\zeta_2 \/ L\\sqrt{\\eta}) + \n \\lfloor \\delta n\/\\ell \\rfloor \\cdot 3\\ell\\eta \/ L \n \\\\\n &\\leq & 1 \/ \\mathrm{e} + \\eta + 4\\zeta_2 \/ \\ell\\sqrt{\\eta} + 6\\eta\\,.\n\\end{eqnarray*}\nIf $\\ell\\geq \\zeta_2 \/ \\eta^{3\/2}$, then from the last display we can conclude $A(\\gamma) \\leq 1 \/ \n\\mathrm{e} + 12\\eta$. We can assume this restriction on $\\ell$ for now.
Indeed, later we will \nspecify the value of $\\ell$ and it will satisfy the condition $\\ell\\geq \\zeta_2 \/ \\eta^{3\/2}$.\\par\n\n\nSo it remains to find positive numbers $\\delta, \\nu, \\ell$ as functions of $\\eta$ and an absolute \nconstant $\\eta_2 > 0$ such that the following three hold for all $0 < \\eta < \\eta_2$: (a) \n$\\mathbb{P}(\\mathcal B_{n, \\eta, \\nu, \\ell, \\delta} \\mid \\mathcal E_{n, \\eta, \\ell}) \\rightarrow \n1$ as $n \\to \\infty$, (b) $\\ell \\geq \\zeta_2 \/ \\eta^{3\/2} \\vee \\zeta_2^2 \/ \\eta$ (see the \nstatement of the lemma as well as the last paragraph) and (c) $\\gamma$ has the desired length. In \nthe next paragraph we will find a triplet $(\\delta, \\nu, \\ell)$ and an absolute constant $\\eta_2'' \n> 0$ such that (a) holds for $0 < \\eta < \\eta_2''$. In the final paragraph we will show that our \nchoice of $(\\delta, \\nu, \\ell)$ also satisfies (b) and (c) whenever $0 < \\eta < \\eta_2$ where \n$\\eta_2 < \\eta_2''$ is an absolute constant.\\par\n\nLet us begin with the crucial observation that at the start of each iteration, the edges between \n$M$ and $\\gamma_O$ are still unexplored. The same is true for the edges between $P$ and $T$ at the \nend of Step 1 in any iteration. Consequently their weights are i.i.d.\\ $\\mathrm{Exp}(1\/n)$ \nregardless of the outcomes from the previous iterations. Therefore, all the bridge weights are \nindependent of each other. Now suppose the mean and variance of each bridge weight can be bounded \nabove by $2\\ell\\eta$ and $\\sigma^2$ respectively, where we emphasize that the latter does not depend \non $n$. By Chebyshev's inequality it would then follow that $\\lim_{n \\to \\infty}\\mathbb{P}(\\mathcal \nB_{n, \\eta, \\nu, \\ell, \\delta} \\mid \\mathcal E_{n, \\eta, \\ell}) = 1$. To that end let us consider \nthe bridge obtained from the $m$-th iteration where $1\\leq m\\leq \\lfloor \\delta n\/\\ell \\rfloor - \n1$.
Note that here we implicitly assume \\eqref{algo_inequation}, but this will shortly be shown \nto be implied by some other constraints involving $\\delta, \\nu$ and $\\ell$. Let $e'$ be the \nlightest edge between $P$, $T$ in Step 2 and $e$ be the predecessor edge adjacent to $e'$ (for \nthis iteration). So the bridge weight is simply $W_{e'} + W_e$. By the discussion of independence at \nthe beginning of the proof, it follows that $W_{e'}$ and $W_e$ are independent of each other and \nalso of the weights of bridges already chosen. Since these weights are minima of some collections \nof i.i.d.\\ exponentials, they will be of small magnitude provided that we are minimizing over a \nlarge collection of exponentials, i.e., $|T|$, $|M|$ and $\\nu$ are large. It follows from the \ndescription of the algorithm that at each iteration we lose $2\\lfloor \\ell\/4\\rfloor$ many vertices \nfrom $T$ and $\\nu$ many vertices from $M$. By simple arithmetic we then get,\n\\begin{equation}\\label{eq-size-P1P2}\n|T| \\geq C_6\\lfloor\\ell \/4\\rfloor f(\\ell,\\eta)n \\mbox{ and }|M| \\geq \\zeta_1 \\eta n\/ 2\\,,\n\\end{equation}\nfor all $1 \\leq m \\leq \\lfloor \\delta n \/ \\ell\\rfloor - 1$ provided\n\\begin{equation}\n\\label{eq-delta-zeta-0}\n\\delta \\leq C_6 f(\\ell,\\eta)\\ell \/ 2 \\mbox{ and } \\nu \\delta \/ \\ell \\leq \\zeta_1 \n\\eta \/ 2\\,.\n\\end{equation}\nNotice that these inequalities automatically imply $\\delta n \/ \\ell \\leq C_6 f(\\ell, \\eta)n$ and \n$\\delta n \\nu \/ \\ell \\leq |V_2|$. Thus if $\\delta, \\nu, \\ell$ satisfy \\eqref{eq-delta-zeta-0}, \n\\eqref{algo_inequation} would also be satisfied for all large $n$ (given $\\delta, \\ell$). Assume \nfor now that \\eqref{eq-delta-zeta-0} holds. Since $W_{e'}$ is the minimum of $\\nu \\times |T|$ many \nindependent $\\mathrm{Exp}(1\/n)$ random variables, it is distributed as $\\mathrm{Exp}(\\nu|T| \/ n)$. \nAs for $W_{e}$, it is bounded by the maximum weight of the $\\nu$ predecessor edges.
From properties of exponential distributions and the description of the algorithm it is not difficult to see that this maximum weight is distributed as $E_1 + E_2 + \cdots + E_\nu$, where $E_{i+1}$ is exponential with rate $(|M| - i)\times 1 \/ n\times \lfloor \ell \/ 4\rfloor$. By \eqref{eq-size-P1P2}, we can then bound the expected weight of the bridge from above by\n
\begin{eqnarray}\label{mean_bnd1}\n
\tfrac{1}{C_6\lfloor\ell \/4\rfloor f(\ell,\eta)n} \times \tfrac{1}{\nu} \times n\mbox{ }+\mbox{ }\tfrac{\nu}{\big(\frac{\zeta_1\eta}{2} - \tfrac{\nu}{n}\big)\lfloor\ell \/4\rfloor} \n
\leq \tfrac{5}{C_6\nu\ell f(\ell,\eta)} + \tfrac{11\nu}{\zeta_1\eta \ell},\n
\end{eqnarray}\n
where the last inequality holds for $\ell \geq 20$ and large $n$ (given $\eta$, $\nu$).\n
By the same line of argument, we get that its variance is bounded by a number that depends only on $\eta$, $\ell$ and $\nu$ (so in particular independent of $n$). To make the right hand side of \eqref{mean_bnd1} bounded above by $2\ell\eta$, we may require each of the summands in \eqref{mean_bnd1} to be bounded by $\ell\eta$. After a little simplification this amounts to\n
\begin{equation}\n
\nu \geq 5 \/ C_6\ell^2\eta f(\ell, \eta) \,, \mbox{ and } \zeta_1 (\ell\eta)^2 \geq 11 \nu\,.\n
\label{mean_bnd2}\n
\end{equation}\n
So we need to pick a positive $\delta = \delta(\eta)$, positive integers $\nu = \nu(\eta), \ell = \ell(\eta)$ and an absolute constant $\eta_2'' > 0$ such that \eqref{eq-delta-zeta-0} and \eqref{mean_bnd2} hold for $0 < \eta < \eta_2''$. 
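For the reader's convenience, the ``little simplification'' behind \eqref{mean_bnd2} can be spelled out: bounding each summand on the right hand side of \eqref{mean_bnd1} by $\ell\eta$ separately gives\n
$$\n
\tfrac{5}{C_6\nu\ell f(\ell,\eta)} \leq \ell\eta \iff \nu \geq \tfrac{5}{C_6\ell^2\eta f(\ell,\eta)}\,, \qquad \tfrac{11\nu}{\zeta_1\eta \ell} \leq \ell\eta \iff \zeta_1(\ell\eta)^2 \geq 11\nu\,,\n
$$\n
which is exactly \eqref{mean_bnd2}. 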
We will deal with \eqref{mean_bnd2} first, which is in fact equivalent to\n
\begin{equation}\n
\label{ineq_solve_1}\n
\zeta_1(\ell \eta)^2 \/ 11 \geq \nu \geq 5 \/ C_6\ell^2\eta f(\ell, \eta)\,.\n
\end{equation}\n
Let us try to find an integer $\ell$ satisfying $\zeta_1(\ell \eta)^2 \/ 11 \geq \big(10 \/ C_6\ell^2\eta f(\ell, \eta)\big)\vee 2$ since this will ensure the existence of a positive integer $\nu$ such that $\nu, \ell$ satisfy \eqref{ineq_solve_1}. Using $f(\ell, \eta) = \mathrm{e}^{-1000\zeta_2\/ \sqrt{\eta}}\sqrt{\eta^3 \/ \ell^7}$, we get that this amounts to\n
$$\ell \geq \tfrac{C_7'\mathrm{e}^{2000\zeta_2 \/ \sqrt{\eta}}}{\eta^9} \vee \tfrac{C_7''}{\eta}\,,\n
$$\n
for some positive, absolute constants $C_7'$ and $C_7''$. Hence there exists an absolute constant $\eta_2''' > 0$ such that the integers $\ell = \lceil \mathrm{e}^{2001\zeta_2 \/ \sqrt{\eta}} \rceil$ and $\nu = \lfloor \zeta_1(\ell \eta)^2\/ 11 \rfloor$ satisfy \eqref{ineq_solve_1} whenever $0 < \eta < \eta_2'''$. Now we need to find a $\delta$ that satisfies \eqref{eq-delta-zeta-0}, which can be rewritten as\n
\begin{equation}\n
\label{ineq_solve_2}\n
\delta \leq (C_6 f(\ell, \eta)\ell \/ 2) \wedge (\zeta_1\eta\ell\/ 2\nu)\,.\n
\end{equation}\n
Again substituting $f(\ell, \eta) = \mathrm{e}^{-1000\zeta_2\/ \sqrt{\eta}}\sqrt{\eta^3 \/ \ell^7}$, we can simplify \eqref{ineq_solve_2} to\n
\begin{equation}\n
\label{ineq_solve_3}\n
\delta \leq (C_6 \mathrm{e}^{-1000\zeta_2\/\sqrt{\eta}}\tfrac{\eta^{3\/2}}{2\ell^{5\/2}}) \wedge \n
(\zeta_1\eta\ell\/ 2\nu)\,.\n
\end{equation}\n
Since $\nu = \lfloor \zeta_1(\ell \eta)^2\/ 11 \rfloor$, \eqref{ineq_solve_3} would be satisfied if\n
$$\delta \leq (C_6 \mathrm{e}^{-1000\zeta_2\/\sqrt{\eta}}\tfrac{\eta^{3\/2}}{2\ell^{5\/2}}) \wedge \n
(11 \/ 2\ell \eta)\,.$$\n
The last display together with our particular choice of $\ell$, i.e. 
$\lceil \mathrm{e}^{2001\zeta_2 \/ \sqrt{\eta}} \rceil$ implies that there exists a positive, absolute constant $\eta_2'' < \eta_2'''$ such that $\delta = \mathrm{e}^{-7000\zeta_2 \/ \sqrt{\eta}}$ satisfies \eqref{ineq_solve_2} for $0 < \eta < \eta_2''$. Thus our choice of the triplet $(\delta, \nu, \ell)$ satisfies \eqref{eq-delta-zeta-0} and \eqref{mean_bnd2} for $0 < \eta < \eta_2''$ and consequently the event $\mathcal B_{n, \eta, \nu, \ell, \delta}$ occurs with high probability for this choice.\par\n
\n
As to the constraint on $\ell$, it is also clear that there exists a positive, absolute constant $\eta_2' < \eta_2''$ such that $\ell = \lceil \mathrm{e}^{2001\zeta_2 \/ \sqrt{\eta}} \rceil$ is larger than $\zeta_2 \/ \eta^{3\/2} \vee \zeta_2^2 \/ \eta$ for all $0 < \eta < \eta_2'$. Finally, it remains to check that $\gamma$ has the length required by the lemma. Since our particular choice of the triplet $(\delta, \nu, \ell)$ satisfies \eqref{algo_inequation} for large $n$ (given $\eta$), we have that $L_{\text{BRIDGE}}(\nu, \ell, \delta) \geq \lfloor \delta n \/ \ell \rfloor \times \ell\/2$. It then follows that there exists a positive, absolute constant $\eta_2 < \eta_2'$ such that $L_{\text{BRIDGE}}(\nu, \ell, \delta) \geq \mathrm{e}^{-7001\zeta_2\/ \sqrt{\eta}}n$ for these particular choices of $\nu, \ell$ and $\delta$ whenever $0 < \eta < \eta_2$ and $n$ is large (given $\eta$). This completes the proof of the lemma.\n
\n
\end{proof}\n
\n
Combining Lemmas \ref{lemma_crucial2} and \ref{algo} completes the proof of the lower bound in \n
Theorem~\ref{Prop}.\n
\n
","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n
\n
 Homotopy theory, originally developed in the realm of topology, is now used in most domains of mathematics. 
One can cite algebraic topology, $\infty$-algebras, operads, higher category theory, mathematical physics or derived geometry as examples. This tendency is in part due to the success of the concept of {\it closed model category} introduced by Quillen in \cite[Chapter 1]{Quillen}. Indeed, a model structure on a category gives access to topological intuition and a collection of important results.\n
\n
The aim of this article is to illustrate how homotopy theory can be used to get a better understanding of {\it singular foliations}. Our motivation comes from recent results of Lavau, Laurent-Gengoux and Strobl \cite{Lavau} on singular foliations which are reminiscent of classical results in homotopy theory. We show that these results can actually be deduced from a {\it left semi-model} structure on the category $L_\infty\/A$ of {\it $L_\infty$-algebroids}.\n
\n
We first recall the results of \cite{Lavau} about resolutions of singular foliations and state analogous statements in the language of model categories. We then outline the structure of the paper.\n
\n
\mypara Foliations are classical objects of study in mathematics. Examples arise in PDEs, Poisson geometry, Lie groups or differential geometry, to name a few. Roughly speaking, a {\it foliation} $\mathcal{F}$ consists of a partition of a smooth manifold $M$ into a collection of submanifolds called the {\it leaves} of $\mathcal{F}$. The collection $D(\mathcal{F})$ of tangent spaces to the leaves of $\mathcal{F}$ is called the {\it distribution} associated to the foliation $\mathcal{F}$. The foliation is called {\it regular} when $D(\mathcal{F})$ forms a vector bundle. As the leaves are submanifolds, the module $\Gamma (D(\mathcal{F}))$ of sections of this bundle is involutive under the bracket of vector fields. However, in the general case, i.e. when the tangent spaces to the leaves do not form a vector bundle, one can still consider the module of vector fields tangent to the leaves. 
{\it Singular foliations} are therefore defined in \cite{An-Sk09}\n
as locally finitely generated submodules of the module of vector fields, closed under the bracket.\n
\n
Contrary to regular foliations, which are by now well understood, singular foliations are still mysterious objects. A promising approach has been proposed in \cite{Lavau} to understand a given singular foliation $\mathcal{F}$. The idea is to associate to $\mathcal{F}$ a {\it Lie $\infty$-algebroid} $Q\mathcal{F}$ which is equivalent to $\mathcal{F}$ \cite[Theorem 1.6]{Lavau}. This $Q\mathcal{F}$ is called the {\it universal Lie $\infty$-algebroid over $\mathcal{F}$}. The use of the adjective {\it universal} is justified by \cite[Theorem 1.8]{Lavau} which states that two such universal Lie $\infty$-algebroids over $\mathcal{F}$ are quasi-isomorphic, and that such a quasi-isomorphism is essentially unique (up to homotopy).\n
\n
\mypara These results should immediately ring a bell to a reader familiar with model categories. For convenience of the reader who is not, let us briefly explain why. Model categories were introduced in order to write down a minimal set of axioms that make it possible to perform constructions familiar in homological algebra (projective resolutions, derived functors) and in homotopy theory (homotopy relation for maps, fibrations, cofibrations, homotopy category). \n
\n
\n
Among these constructions, the {\it cofibrant replacement}, which subsumes the notion of projective resolution, is central. The idea is, given $\mathcal{F}$, to find another object $Q\mathcal{F}$ which is better behaved (i.e. is cofibrant) and remains equivalent to $\mathcal{F}$.\n
The proper way to formulate these properties is to introduce three classes of maps, $\mathcal{W}$, $Cof$ and $Fib$, respectively called {\it weak equivalences}, {\it cofibrations} and {\it fibrations}. 
$Q\\mathcal{F}$ is {\\it weak equivalent} to $\\mathcal{F}$ means that there exists a map in $\\mathcal{W}$ between $Q\\mathcal{F}$ and $\\mathcal{F}$. $Q\\mathcal{F}$ is {\\it cofibrant} means that the map $0\\hookrightarrow Q\\mathcal{F}$ from the initial object is a cofibration.\n\n\nCofibrant replacements are important to derive functors. The {\\it left derived functor $D(F)$} of a functor $F$ is defined by $D(F)(\\mathcal{F}):=F(Q\\mathcal{F})$. A non trivial part of the theory is to ensure that this derived functor is well defined, i.e. does not depend on the choice of the cofibrant replacement. This is relevant for our purpose since key lemmas involved in proving this fact are the exact analogues of the results of \\cite{Lavau} we are interested in (see proposition \\ref{machinerysemimodel}).\n\n\\mypara The natural strategy would therefore be to search for a model structure on the category of Lie $\\infty$-algebroids. However, with the definition of $\\cite{Lavau}$ (in terms of complexes of {\\it vector bundles}), not every singular foliation admits a cofibrant replacement (section 1.1 of $\\cite{Lavau}$). We therefore need to allow ourselves to consider what we call {\\it $L_\\infty$-algebroids} in definition \\ref{holiea}, i.e. a complex $L$ of {\\it A-modules} equipped with an $L_\\infty$-structure compatible with the A-module structure. The nuance between Lie $\\infty$-algebroids and $L_\\infty$-algebroids is that for a Lie $\\infty$-algebroid, the module $L$ consists in sections of a differential graded vector bundle. \n\nHowever, as remarked by Nuiten in \\cite[Example 3.1.12]{Nuiten}, even in the more general setting of $L_\\infty$-algebroids, an axiom of the definition of model category fails to be satisfied. 
Therefore, Nuiten has instead equipped the category $L_\infty\/A$ of $L_\infty$-algebroids with a {\it semi-model structure}, a relaxed version of the concept of model category (see definition \ref{semi-model}).\n
\n
We can still apply this machinery to singular foliations, but in order to do so, we need the analogues of \cite[Lemma 5.1]{DS} and \cite[Lemma 4.9]{DS} for semi-model categories. This is why we start section \ref{tools} by stating Proposition \ref{machinerysemimodel}, whose proof is essentially the same as for model categories. The only ``work'' consists of checking that the axioms involved in the classical case are still valid for a semi-model category. In order to facilitate this task for the reader, we recall the\n
outline of the classical proof in the appendix.\n
\n
We then give in section \ref{semi} details about Nuiten's construction. After recalling in subsection \ref{rapleft} the precise statement and the relevant definitions, we recall Lemma \ref{lemmatransfer}, which allows one to transport a semi-model structure via an adjunction. The next step is to describe in subsection \ref{slice} the semi-model structure on $Mod\/T_A$ that can be transported to $L_\infty\/A$ via the Free\/Forget adjunction. We conclude this section by describing thoroughly in subsections \ref{catholie} and \ref{freeLR} the free functor involved in this adjunction. \n
\n
With this at hand we are able to compare in section \ref{cofibrantreplacement} the results of Proposition \ref{machinerysemimodel} applied to $L_\infty\/A$ with the results of \cite{Lavau} that we wish to recover. We conclude in subsection \ref{open} with a list of open questions.\n
\n
\paragraph{Acknowledgements} We would like to thank Fran\u00e7ois M\u00e9tayer for a discussion about filtered colimits and Chris Rogers for mentioning the reference \cite{Nuiten}. 
A large part of this work was carried out at the Max Planck Institute for Mathematics in Bonn and we are grateful for the excellent resources and working conditions there.\n
\n
\section{Main technical tool}\label{tools}\n
\n
In \cite{Lavau}, the universal Lie $\infty$-algebroid of a singular foliation \cite[Definition 1.5]{Lavau} is proven to behave in the same manner as a cofibrant replacement in a left semi-model category would with respect to homotopies.\n
\n
The relevant definitions of left semi-model category, cofibrant replacement, and left and right homotopies can be found in Definitions~\ref{semi-model}, \ref{cofibrantrep} and \ref{lrhomotopy}, respectively. The definition of left semi-model category is taken from \cite[Section 4.4.1]{Fresse}, \cite[Definition 1]{Spitzweck} and \cite[Definition 1.5]{Barwick}.\n
\n
To be more precise, \cite[Theorem 1.6, 1.8, Proposition 1.22]{Lavau} are reminiscent respectively of Proposition~\ref{machinerysemimodel}.\ref{1}, .\ref{5} and .\ref{3} below:\n
\n
\begin{prop}\label{machinerysemimodel}\n
Let $\mathcal{C}$ be a left semi-model category. Suppose that $A$, $X$ and $Y$ are objects in $\mathcal{C}$.\n
\n
\begin{enumerate}\n
\item \label{1} Every object in $\mathcal{C}$ has a cofibrant replacement.\n
\item\label{3}\n
If $A$ is cofibrant, then $\overset{l}{\sim}$ is an equivalence relation on $Hom_\mathcal{C}(A,X)$. 
We denote the set of equivalence classes by $\pi^l (A,X)$.\n
\item \label{2}\n
If $A$ is cofibrant and $p\colon Y\to X$ is an acyclic fibration, then composition with $p$ induces a bijection:\n
$$\n
p_\ast\colon \pi^l (A,Y)\to \pi^l (A,X), \ [f]\mapsto [pf].\n
$$\n
\item \label{5} Any two cofibrant replacements of $A$ are weakly equivalent and any two such weak equivalences between them are left homotopic.\n
\end{enumerate}\n
\end{prop}\n
\n
We flesh out this analogy in Section \ref{app} of this work by applying this proposition to $\mathcal{C}=L_\infty\/\mathcal{O}_M$ (Definition~\ref{holiea}), the category of $L_\infty$-algebroids over a manifold $M$.\n
\n
\begin{rem}\n
In what follows we stop writing ``left'' and simply write semi-model category for a left semi-model category.\n
\end{rem}\n
\n
\section{A semi-model structure on $\bm{L_\infty\/A}$}\label{semi}\n
We remind the reader that throughout this section we consider categories of differential non-negatively graded modules over some base field $\mathbbm{k}$ of characteristic 0 and degree-preserving morphisms unless otherwise stated.\n
\n
In order to be able to apply Proposition~\ref{machinerysemimodel} to the category of $L_\infty$-algebroids, we need to equip it with a cofibrantly generated semi-model structure. In Section~\ref{rapleft} we state Theorem~\ref{transfer} \cite{Nuiten}, which gives a semi-model structure on $L_\infty\/A$, the category of $L_\infty$-algebroids, from the model structure described in Section~\ref{slice} on $Mod\/T_A$, the category of $dg\text{-}A\text{-}$modules over $T_A$. 
In Section~\\ref{catholie} we define the category $L_\\infty\/A$, which is the main focus of this work, and in Section~\\ref{freeLR} we describe the free functor $LR\\colon Mod\/T_A\\to L_\\infty\/A$ that allows to apply Lemma~\\ref{lemmatransfer}.\n\n\\subsection{Statement of the result}\\label{rapleft}\nIn this subsection we present Theorem~\\ref{transfer}, which allows us to use the machinery available in model categories for $L_\\infty\/A$ (Definition~\\ref{holiea}). This theorem is originally proved for $dg$-algebras and unbounded modules, but for our purposes we just need to state it for an algebra $A$ concentrated in degree 0 and non-negatively differential graded modules.\n\nWe start by stating the definition of semi-model category.\n\n\n\\begin{df}\\label{semi-model}\nA {\\bf semi-model category} is a bicomplete category $\\mathcal{C}$ with three subcategories $\\mathcal{W}$, $Fib$ and $Cof$, each containing all identity maps, such that:\n\\begin{enumerate}\n\\item[SM1)] $\\mathcal{W}$ has the 2-out-of-3 property. As well, $\\mathcal{W}$, $Fib$ and $Cofib$ are stable under retracts.\n\\item[SM2)] \\label{SM2} The elements of $Cofib$ have the left lifting property with respect to $\\mathcal{W}\\cap Fib$. The elements of $\\mathcal{W}\\cap Cofib$ with cofibrant domain have the left lifting property with respect to fibrations.\n\\item[SM3)]\\label{semifactorization} Every map can be factored functorially into a cofibration followed by a trivial fibration. 
Every map with cofibrant domain can be factored as a trivial cofibration followed by a fibration.\n
\item[SM4)] The fibrations and trivial fibrations are stable under transfinite composition, products, and base change.\n
\end{enumerate}\n
\end{df}\n
\n
\begin{rem}\label{semifrommod}\n
The definition of a semi-model category is a weakening of the notion of {\bf closed model category}, which requires in addition the following modifications to the axioms:\n
\begin{enumerate}\n
 \item[SM2' \& SM3'] The axioms SM2 and SM3 hold without the assumption of cofibrancy of the domain.\n
\end{enumerate}\n
For a closed model category one does not require axiom SM4, which follows from the other axioms.\n
\end{rem}\n
\n
\begin{rem}\n
In particular, any model category is a semi-model category.\n
\end{rem}\n
\n
\begin{thm}\cite[Lemma 2.14, Theorem 3.1]{Nuiten}\label{transfer}\n
Let $A$ be an algebra. Then the category $L_\infty\/A$ of $L_\infty$-algebroids over $A$ admits a semi-model structure, in which a map is a weak equivalence (resp. 
a fibration) if and only if it is a quasi-isomorphism (resp. a surjection in positive degrees).\n
\end{thm}\n
\n
In fact, this structure is cofibrantly generated \cite[Definition 11.1.2]{HirschModel}, \cite[Chapter 11]{HirschModel} and \cite[Section 2.1]{Hovey}, which means that there are sets of maps that determine the fibrations and the trivial fibrations through the right lifting property.\n
\n
Theorem~\ref{transfer} is proven through the following version of the transfer lemma:\n
\begin{lem}[Transfer of semi-model structure]\label{lemmatransfer}\n
Let $F\colon \mathcal{M}\leftrightarrow \mathcal{N}\cocolon G$ be an adjunction between locally presentable categories and suppose that $\mathcal{M}$ carries a semi-model structure with sets of generating (trivial) cofibrations $I$ and $J$.\n
\n
Define a map in $\mathcal{N}$ to be a weak equivalence (fibration) if its image under $G$ is a weak equivalence (fibration) in $\mathcal{M}$ and a cofibration if it has the left lifting property against the trivial fibrations. Assume that the following condition holds:\n
\begin{itemize}\n
 \item Let $f\colon A\to B$ be a map in $\mathcal{N}$ with cofibrant domain, obtained as a transfinite composition of pushouts of maps in $F(J)$. 
Then $f$ is a weak equivalence.\n
\end{itemize}\n
Then the above classes of maps determine a tractable semi-model structure on $\mathcal{N}$ whose generating (trivial) cofibrations are given by $F(I)$ and $F(J)$.\n
\end{lem}\n
\n
The transfer takes the model structure on the category $Mod\/T_A$ (Definition~\ref{anchored}) and gives a semi-model structure on $L_\infty\/A$ through an adjunction\n
\begin{center}\n
\begin{tikzcd}\n
L_\infty\/A\n
\arrow[bend left=35]{r}[name=F]{F}\n
& Mod\/T_A\n
\arrow[bend left=35]{l}[name=LR]{LR}\n
\end{tikzcd}\n
\end{center}\n
The existence of such an adjunction is the content of Proposition~\ref{freelr} below.\n
\n
\begin{proof}[Proof of Theorem~\ref{transfer}]\n
The constructions in \cite{Nuiten} used to prove the unbounded case preserve the subcategory $dg\text{-}A\text{-}Mod^{\geq0}$. Thus, the result follows in this case.\n
\end{proof}\n
\n
\n
\subsection{The model structure on $\bm{Mod\/T_A}$}\label{slice}\n
The objective of this subsection is to prove Proposition~\ref{modinMODA}, where we give a model structure on $Mod\/T_A$, the category of anchored differential non-negatively graded modules over an algebra, which we describe as a slice category in Definition~\ref{anchored}. We then state a general result, Theorem~\ref{modelunder}, which makes it possible to obtain the model structure on $Mod\/T_A$ from the model structure on $dg\text{-}A\text{-}Mod^{\geq0}$ given in Theorem~\ref{modinMOD}.\n
\n
\n
Recall the classical notion of slice category:\n
\n
\begin{df} Let $\mathcal{M}$ be a category and $Z$ an object of $\mathcal{M}$. The {\bf slice category} $\mathcal{M}\/Z$ has for objects the arrows of $\mathcal{M}$ of the form $X\rightarrow Z$\n
and for arrows the commutative diagrams in $\mathcal{M}$ of the form\n
\begin{center}\n
\begin{tikzpicture}[normal line\/.style={->},font=\scriptsize]\n
\matrix (m) [matrix of math nodes, row sep=2em,\n
column sep=1.5em, text height=1.5ex, text depth=0.25ex]\n
{ X& & Y \\\\\n
 & Z. 
& \\\\ };\n
\n
\path[normal line]\n
(m-1-1) edge (m-1-3)\n
edge (m-2-2)\n
(m-1-3) edge (m-2-2);\n
\end{tikzpicture}\n
\end{center}\n
\end{df}\n
\n
In what follows, we denote by $T_A$ the $A$-module of derivations of $A$. We can look at $T_A$ as a graded $A$-module, and if $A$ is concentrated in degree 0, so is $T_A$.\n
\begin{df} \label{anchored}\n
We denote by $\bm{Mod\/T_A}$ the slice category $dg\text{-}A\text{-}Mod^{\geq0}\/T_A$. We refer to such an object $\rho\colon M\to T_A$ as an {\bf anchored module}.\n
\end{df}\n
\n
\n
\n
\begin{prop}\label{modinMODA}\n
The category $Mod\/T_A$ has a cofibrantly generated semi-model structure.\n
\end{prop}\n
\n
In Remark~\ref{semifrommod} we highlight the fact that a model structure is a particular example of a semi-model structure. So we actually exhibit something stronger, namely a {\it cofibrantly generated} model structure on $Mod\/T_A$.\n
\n
\n
\begin{thm}\cite[Theorem 7]{Hirsch}\n
 \label{modelunder}\n
 Let $\mathcal{M}$ be a cofibrantly generated model category (see\n
 \cite[Definition~11.1.2]{HirschModel}) with generating cofibrations $I$ and\n
 generating trivial cofibrations $J$, and let $Z$ be an object of\n
 $\mathcal{M}$. 
If\n \\begin{enumerate}\n \\item $I_{Z}$ is the set of maps in $\\mathcal{M}\/{Z}$ of the form\n \\begin{center}\n \\begin{equation} \\label{morphabove}\n\\begin{tikzpicture}[normal line\/.style={->},font=\\scriptsize]\n\\matrix (m) [matrix of math nodes, row sep=2em,\ncolumn sep=1.5em, text height=1.5ex, text depth=0.25ex]\n{ X& & Y \\\\\n & Z & \\\\ };\n\n\\path[normal line]\n(m-1-1) edge (m-1-3)\nedge (m-2-2)\n(m-1-3) edge (m-2-2);\n\\end{tikzpicture}\n\\end{equation}\n\\end{center}\n in which the map $X \\to Y$ is an element of $I$ and\n \\item $J_{Z}$ is the set of maps in $\\mathcal{M}\/{Z}$ of the\n form \\eqref{morphabove} in which the map $X \\to Y$ is an element of\n $J$,\n \\end{enumerate}\n then the standard model category structure on $\\mathcal{M}\/{Z}$\n (in which a map \\eqref{morphabove} is a cofibration, fibration, or\n weak equivalence in $\\mathcal{M}\/{Z}$ if and only if the map $X\n \\to Y$ is, respectively, a cofibration, fibration, or weak\n equivalence in $\\mathcal{M}$) is cofibrantly generated, with generating\n cofibrations $I_{Z}$ and generating trivial cofibrations $J_{Z}$.\n\\end{thm}\n\nWe introduce the classes of cofibrations and trivial fibrations in $dg\\text{-}A\\text{-}Mod^{\\geq0}$:\n\nLet the ``disc\" $D_A(n)$ denote the chain complex $A[n]\\xrightarrow{Id_A} A[n-1]$, and the ``sphere\" $S_A(n)$ denote the chain complex $A[n]$ with trivial differential. Let $J$ be the set of morphisms of complexes of the form $0\\to D_A(n)$ and let $I$ be the set of inclusions $S_A(n-1)\\to D_A(n)$.\n\n\\begin{thm}\\cite[Theorem 2.3.11]{Hovey}\\label{modinMOD}\n$dg\\text{-}A\\text{-}Mod^{\\geq0}$ is a cofibrantly generated model category with $I$ as its generating set of cofibrations, $J$ as its generating set of trivial cofibrations, and quasi-isomorphisms as its weak equivalences. 
The fibrations are the surjections.\n
\end{thm}\n
\n
\begin{proof}[Proof of Proposition~\ref{modinMODA}]\n
We apply Theorem \ref{modelunder} to the slice category $Mod\/T_A$ and the cofibrantly generated model structure on $dg\text{-}A\text{-}Mod^{\geq0}$ given by Theorem~\ref{modinMOD}.\n
\end{proof}\n
\n
\subsection{The category $\bm{L_\infty\/A}$}\label{catholie}\n
\n
In this subsection we define the category of $L_\infty$-algebroids associated to a graded algebra $A$. In broad terms, such an algebroid is given by an $L_\infty$-algebra with a compatible $A$-module structure in the same spirit as a Lie algebroid in \cite{Kapranov}. In the following we consider non-negatively graded dg-modules.\n
\n
\begin{df}\label{holiea}\n
\begin{enumerate}\n
\item An {\bf $\bm{L_\infty}$-algebra} is a module $L$ with an $L_\infty$-structure \cite[Definition 2]{Vitagliano2} given by degree $k-2$ anti-symmetric multibrackets\n
$$\n
[\ ]_k\colon \Lambda^k (L)\to L,\n
$$\n
with\n
$$\n
\sum_{i+j=k}(-1)^{ij}\sum_{\sigma\in S_{i,j}}\chi(\sigma, v)\left[[v_{\sigma(1)},\ldots,v_{\sigma(i)}]_i,v_{\sigma(i+1)},\ldots,v_{\sigma(i+j)}\right]_{j+1}=0.\n
$$\n
Here, $\chi(\sigma, v)$ is the sign for which $v_1\wedge\cdots \wedge v_k=\chi(\sigma, v)v_{\sigma(1)}\wedge\cdots\wedge v_{\sigma(k)}$.\n
\n
We remind the reader that $\bigwedge^k L$ is the quotient of the graded vector space $\bigotimes^k L$ under the equivalence relation linearly generated by\n
$$\n
v_1\otimes\cdots\otimes v_k=-(-1)^{\lvert v_i\rvert\lvert v_{i+1}\rvert}v_1\otimes\cdots\otimes v_{i+1}\otimes v_i\otimes\cdots\otimes v_k\n
$$\n
for all $i$. 
As well, $S_{i,j}$ is the group of $(i,j)$-unshuffles, which is to say, the set of permutations $\sigma$ of $1,\ldots, k=i+j$ such that\n
$$\n
\sigma(1)<\cdots<\sigma(i) \text{ and }\sigma(i+1)<\cdots<\sigma(k).\n
$$\n
\n
\item An $L_\infty$-algebra $L$ with an $A$-module structure (an {\bf $\bm{A\text{-}L_\infty}$-algebra}) is an {\bf $\bm{L_\infty}$-algebroid} if it has a compatible action on $A$. The action is specified by an $A$-module morphism (which we call the anchor)\n
$$\n
\rho\colon L\to T_A\n
$$\n
of degree $0$. The compatibility conditions for the anchor and brackets are given by:\n
\begin{gather*}\n
[v_0,av_1]_2=(-1)^{\lvert a\rvert\lvert v_0\rvert}a[v_0,v_1]_2+\rho(v_0)[a]v_1\\\\\n
\left[v_0,\ldots,av_{k}\right]_{k+1}=(-1)^{\lvert a\rvert \left(k+\lvert v_0\rvert+\cdots+\lvert v_{k-1}\rvert\right)}a\left[v_0,\ldots,v_{k}\right]_{k+1}\n
\end{gather*}\n
for $a$ in $A$ and $v_i\in L$.\n
\n
We denote by $\bm{L_\infty\/A}$ the category of $L_\infty$-algebroids with morphisms given by the $A$-module morphisms that preserve the multibrackets.\n
\end{enumerate}\n
\end{df}\n
\n
\begin{rem}\label{remarkLR}\n
Since the higher brackets of $T_A$ all vanish, the data of an $L_\infty$-algebroid on an $A$-module $L$ is equivalent to that of an $L_\infty$-algebra structure on $L$ where the higher brackets are $A$-linear on the first coordinate, together with a strict morphism \cite[Definition 5]{Vitagliano2} of $L_\infty$-algebras to $T_A$.\n
\n
An $L_\infty$-algebroid is a version up to homotopy of the Lie-Rinehart algebra defined in \cite[Section 1]{Kapranov}. The higher brackets and the jacobiators give the structure up to homotopy.\n
\end{rem}\n
\n
\n
\subsection{The free functor $\bm{LR}$}\label{freeLR}\n
In Proposition~\ref{free linf1} we describe a left adjoint to the forgetful functor\n
$$\n
F\colon L_\infty\/A\to Mod\/T_A,\n
$$\n
which takes an $L_\infty$-algebroid and gives back its underlying module. 
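For orientation: the adjunction in question is of free--forgetful type, i.e. there is a natural bijection\n
$$\n
Hom_{L_\infty\/A}\left(LR(M), L\right)\cong Hom_{Mod\/T_A}\left(M, F(L)\right)\n
$$\n
(cf. Proposition~\ref{free linf1} and Proposition~\ref{freelr} below), identifying morphisms of $L_\infty$-algebroids out of the free object $LR(M)$ with morphisms of anchored modules out of $M$. The hom-set notation here is only the standard reformulation of adjointness. 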
The existence of this functor is implicit throughout \cite[Section 3]{Nuiten}, but here we give an explicit construction.\n
\n
We start by presenting the $L_\infty$-words on a fixed (not necessarily bounded) $A$-module $M$ ($A$ possibly differential graded): An {\bf $\bm{L_\infty^A}$-word} of arity $k$ is recursively defined as an element of\n
$$\n
A\otimes\left(\bigwedge^k M'\right)[k-2],\n
$$\n
where $M'$ is an $A$-module of some previously defined $L_\infty^A$-words. We represent one such word by a symbol\n
$$\n
a\left[v_0,\ldots,v_{k-1}\right]_k,\n
$$\n
where $a$ is an element in $A$, while the $v_i$ are all previously constructed $L_\infty^A$-words. The degree of $a\left[v_0,\ldots,v_{k-1}\right]_k$ is defined to be\n
$$\n
deg\left(a\left[v_0,\ldots,v_{k-1}\right]_k\right)=\sum_i deg(v_i)+ \lvert a\rvert +k-2.\n
$$\n
\n
For non-negatively graded $M$ and non-graded $A$, it is necessary to modify this definition. The recursive construction goes as follows: We start by setting all the elements of $\lambda_0=M$ as $L_\infty^A$-words.\n
\n
We can then form the symbols that involve elements of $\lambda_0$. The set of all such symbols forms an $A$-module which can be expressed as\n
$$\n
\lambda'_1\coloneqq \bigoplus_{k\geq 1} A\otimes \left(\bigwedge\nolimits^k \lambda_0\right)[k-2]\/K_0.\n
$$\n
Here we take $\bigwedge^1 \lambda_{0}[1]$ to be $\lambda_{0}[1]$ and $K_0$ to be the $A$-submodule of negatively graded elements. Thus, the resulting module is non-negatively graded. 
Including the elements of $\\lambda_0$ themselves we obtain\n$$\n\\lambda_1\\coloneqq \\lambda_0\\oplus \\lambda'_1.\n$$\n\n\nRecursively, once defined $\\lambda_n$, the $A$-module of all $L_\\infty^A$-words with at most $n$ brackets, we define\n$$\n\\lambda'_{n+1}\\coloneqq \\bigoplus_{k\\geq 1} A\\otimes \\left(\\bigwedge\\nolimits^k \\lambda_{n}\\right)[k-2]\/K_n,\n$$\nwhere again, $K_n$ is the $A$-submodule of negatively graded elements, and then\n$$\n\\lambda_{n+1}\\coloneqq \\lambda_0\\oplus \\lambda'_{n+1}.\n$$\nIt is clear from the construction that $\\lambda_n\\subset \\lambda_{n+1}$ for all $n$.\n\n\\begin{df}\nGiven an $A$-module $M$, we define the {\\bf set of $\\bm{L_\\infty^A}$-words on $\\bm{M}$} as the colimit of $A$-modules\n$$\nL(M)=colim\\ \\lambda_{i}.\n$$\n\\end{df}\n\nGiven an element $v\\in \\lambda_n$, we say that $w\\in \\lambda_{n+1}$ is {\\bf directly over} $v$ if there are $v_1,\\ldots,v_{k-1}\\in \\lambda_n$ and $a$ in $A$ such that $w=[a v,v_1,\\ldots,v_{k-1}]_k$. 
We say that {\\bf$\\bm{w}$ is over $\\bm{v}$} if there is a sequence $v=v_0,\\ldots,v_k=w$ such that $v_{i+1}$ is directly over $v_i$.\n\n\\begin{df}\nThe {\\bf free $\\bm{L_\\infty^A}$-algebra} associated to $M$ is the $A$-module given by\n$$\nL_\\infty^A(M)\\coloneqq L(M)\/\\sim,\n$$\nwhich is the quotient of the set of $L_\\infty^A$-words of $M$ under $\\sim$, which is the $A$-module generated by all the elements over the jacobiators\n$$\n\\sum_{i+j=k}(-1)^{ij}\\sum_{\\sigma\\in S_{i,j}}\\chi(\\sigma, v)\\left[[v_{\\sigma(1)},\\ldots,v_{\\sigma(i)}]_i,v_{\\sigma(n+1)},\\ldots,v_{\\sigma(i+j)}\\right]_{j+1}.\n$$\nwith $\\chi(\\sigma,v)$ the sign for which $v_1\\wedge\\cdots \\wedge v_k=\\chi(\\sigma, v)v_{\\sigma(1)}\\wedge\\cdots\\wedge v_{\\sigma(k)}$.\n\nThe $L_\\infty$-brackets on $L_\\infty^A(M)$ are defined in the obvious way: given $k+1$ elements of $L_\\infty^A(M)$ represented by $L_\\infty^A$ words $v_0,\\ldots,v_k$, their bracket is given by the class represented by $[v_0,\\ldots, v_k]_{k+1}$.\n\nThese brackets are well defined on $L_\\infty^A(M)$ and their jacobiators vanish from construction. \n\\end{df}\n\nIn the construction of $L_\\infty^A(M)$ we did not make use of the internal differential structure nor the anchors of the module $M$. 
These are considered below in the construction of the free $L_\\infty$-algebroid associated to $M$.\n\n\\begin{prop}\\label{free linf1}\nGiven a graded $A$-module $M$, the construction $L_\\infty^A(M)$ is functorial and is a left adjoint to the forgetful functor from the category of $A$-modules with an $L_\\infty$-algebra structure and strict morphisms of $A$-modules to the category of $A$-modules.\n\\end{prop}\n\n\\begin{proof}\nGiven a morphism of $A$-modules $f\\colon M\\to N$, we give a morphism of $A$-modules $L_\\infty^A(f)\\colon L_\\infty^A(M)\\to L_\\infty^A(N)$ by defining it on the level of $L_\\infty^A$-words: we start by setting $L(f)m\\coloneqq f(m)$ for $m$ in $M$, and recursively define\n\\begin{equation}\\label{equation}\nL(f)a\\left[v_0,\\ldots, v_{k-1}\\right]_k\\coloneqq a\\left[L(f)v_0,\\ldots, L(f)v_{k-1}\\right]_k.\n\\end{equation}\nThis is clearly a morphism of $A$-modules that preserves jacobiators. Therefore, it descends to the required morphism $L_\\infty^A(f)\\colon L_\\infty^A(M)\\to L_\\infty^A(N)$. Since it preserves the $L_\\infty$-brackets by construction, it is a strict $L_\\infty$-morphism.\n\nThe adjunction property can be verified using the fact that $M$ is an $A$-submodule of $L_\\infty^A(M)$. Therefore, given any $A$-module, bracket-preserving morphism ({\\bf$\\bm{A\\text{-}L_\\infty}$-morphism}) $F\\colon L_\\infty^A(M)\\to L$, the restriction to $M$ is an $A$-module morphism.\n\nConversely, given an $A\\text{-}L_\\infty$-algebra $L$, Equation~\\ref{equation} above defines the unique strict $L_\\infty$-extension to $L_\\infty^A(M)$ of a given $A$-module morphism $f\\colon M\\to L$. 
The verification that these two assignments are mutually inverse follows from the construction of the $L_\\infty$-extensions.\n\\end{proof}\n\n\\begin{rem}\nProposition~\\ref{free linf1} shows that $L_\\infty^A(M)$ is a free object associated to $M$ in $A\\text{-}L_\\infty$-algebras.\n\\end{rem}\n\n\\begin{rem}\\label{anchors}\nIf $\\alpha\\colon M\\to T_A$ is an anchored module, so is $L_\\infty^A(M)$. In fact, we obtain an $A\\text{-}L_\\infty$-morphism $\\hat{\\alpha}\\colon L_\\infty^A(M)\\to T_A$ from $\\alpha$ through the described adjunction\n\\begin{center}\n\\begin{tikzcd}\nA\\text{-}L_\\infty \\arrow[bend left=35]{r}[name=F]{F} & Mod \\arrow[bend left=35]{l}[name=L]{L}\n\\end{tikzcd}\n\\end{center}\nHere we consider the dg-Lie algebra $T_A$ as an $A$-module with the $L_\\infty$-structure coming from the Lie bracket.\n\\end{rem}\n\n\\begin{prop}\\label{freelr}\nLet $\\alpha\\colon M\\to T_A$ be in $Mod\/A$ with differential $d$. The $A$-module over $T_A$ given by the quotient\n$$\nLR(M)\\coloneqq L_\\infty^A(M)\/\\sim\n$$\nis an $L_\\infty$-algebroid. 
The anchor $\\rho\\colon LR(M)\\to T_A$ is induced on the quotient by the map $\\hat{\\alpha}\\colon L_\\infty^A(M)\\to T_A$ described in Remark~\\ref{anchors}.\n\nHere, $\\sim$ is the $A\\text{-}L_\\infty$-ideal generated by the relations\n\\begin{gather*}\n[v]_1=d(v)\\\\\n[v_0,av_1]_2=\na[v_0,v_1]_2+\\hat{\\alpha}(v_0)(a)v_1\\\\\n\\left[v_0,\\ldots,av_{k}\\right]_{k+1}=\na\\left[v_0,\\ldots,v_{k}\\right]_{k+1}\n\\end{gather*}\nfor $v$ in $M$, the $v_i$ in $L_\\infty^A(M)$, and $a$ in $A$.\n\nFinally, this construction is functorial and gives a left adjoint to the forgetful functor $F\\colon L_\\infty\/A\\to Mod\/A$.\n\\end{prop}\n\n\\begin{rem}\nAn {\\bf$\\bm{A\\text{-}L_\\infty}$-ideal} in an $A\\text{-}L_\\infty$-algebra $L$ generated by a set $S$ is the smallest $A$-submodule of $L$ containing $S$ that is an ideal for the $L_\\infty$-brackets.\n\\end{rem}\n\n\\begin{proof}\nFrom the definition of the quotient, the $L_\\infty$-brackets of $L_\\infty^A(M)$ descend to $L_\\infty$-brackets on $LR(M)$ and, since $T_A$ is already an $L_\\infty$-algebroid, so does the anchor.\n\nThe compatibility conditions for the brackets and anchor from Definition~\\ref{holiea} are exactly the quotient relations and so $LR(M)$ is itself an $L_\\infty$-algebroid.\n\nGiven $\\alpha_M\\colon M\\to T_A$ and $\\alpha_N\\colon N\\to T_A$ and $f\\colon M\\to N$ in $Mod\/A$, from Proposition~\\ref{free linf1} we obtain a commutative diagram of $A\\text{-}L_\\infty$-algebras\n\\begin{center}\n \\begin{tikzcd}[column sep=small]\n L_\\infty^A(M)\n \\arrow[rr, \"L(f)\"]\n \\arrow[rd, swap, \"\\hat{\\alpha}_{M}\"]\n && L_\\infty^A(N)\n \\arrow[ld, \"\\hat{\\alpha}_{N}\"]\n \\\\\n & T_A\n \\end{tikzcd}\n\\end{center}\nThis induces a morphism $LR(f)\\colon LR(M)\\to LR(N)$ in $L_\\infty\/A$.\n\nThe proof that the functors $F\\colon L_\\infty\/A\\leftrightarrow Mod\/A\\cocolon LR$ are adjoint to each other is similar to that in the proof of Proposition~\\ref{free 
linf1}.\n\\end{proof}\n\n\\begin{df}\\label{freehlr}\nWe define the {\\bf free $\\bm{L_\\infty}$-algebroid} associated to $\\alpha\\colon M\\to T_A$ in $Mod\/A$ to be the $L_\\infty$-algebroid $LR(M)$.\n\\end{df}\n\n\n\\section{Applications to singular foliations}\\label{app}\nIn this section we propose results parallel to those of \\cite[Section 1]{Lavau} for Lie $\\infty$-algebroids in terms of the (semi-)model category structure on $L_\\infty\/A$ given by Theorem~\\ref{transfer}. We first show how singular foliations and Lie $\\infty$-algebroids \\cite{Lavau} are examples of $L_\\infty$-algebroids. With this connection in mind, we then compare each statement of \\cite{Lavau} with a semi-model category analogue, which we introduce and prove as a corollary of our main technical result, Proposition~\\ref{machinerysemimodel}. We conclude with open questions raised by the discrepancies between the two sets of results.\n\n\\subsection{Universal Lie $\\bm{\\infty}$-algebroids}\nIn this subsection, in order to make the connection with the semi-model structure on $L_\\infty\/\\mathcal{O}_M$, we show that both Lie $\\infty$-algebroids and singular foliations can be seen as objects in $L_\\infty\/\\mathcal{O}_M$. We fix a smooth manifold $M$ and its ring of smooth functions $\\mathcal{O}_M$. Recall\n\n\\begin{df}\\cite[Definition 1.13]{Lavau}\nA {\\bf Lie $\\infty$-algebroid} is a non-positively graded differential vector bundle over a manifold $M$ whose space of sections has a strong homotopy Lie-Rinehart algebra structure. That is, an $LR_\\infty[1]$-algebra in the sense of \\cite[Definition 7]{Vitagliano2} over the algebra $\\mathcal{O}_M$ of smooth functions on $M$.\n\\end{df}\n\n\\begin{rem}\nIn \\cite{Lavau}, the definitions are given in terms of non-positive cohomological chain complexes. 
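Concretely (a standard regrading, recalled here for the reader's convenience), a cochain complex $(C^\\bullet,d)$ concentrated in non-positive degrees corresponds to a non-negatively graded homological chain complex via\n$$\nC_n\\coloneqq C^{-n},\\qquad \\partial_n\\coloneqq d^{-n}\\colon C_n\\to C_{n-1}.\n$$\n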
We can directly transport our results to this setting by using the isomorphism of this category to that of non-negative homological chain complexes.\n\\end{rem}\n\nThen one has \n\n\n\\begin{lem}\\label{inclusion}\nThe de-suspension of the space of sections of a Lie $\\infty$-algebroid forms an $L_\\infty$-algebroid.\n\\end{lem}\n\n\\begin{proof}\n The compatibility conditions of the anchor and brackets in \\cite[Definition 1.13]{Lavau} are the higher Jacobi equations for the $LR_\\infty[1]$-algebra. In this case, since both $\\mathcal{O}_M$ and $T_{\\mathcal{O}_M}$ are concentrated in degree 0, the action of the algebroid on $\\mathcal{O}_M$ has no higher terms. Therefore, taking the $L_\\infty$-algebroid associated to this $LR_\\infty[1]$-algebra we obtain (after desuspension) an object in $L_\\infty\/\\mathcal{O}_M$.\n \\end{proof}\n\nWe can also express the notion of singular foliation in terms of $L_\\infty$-algebroids. First recall \n\n\\begin{df}\nA {\\bf singular foliation} over $M$ is a locally finitely generated $\\mathcal{O}_M$-submodule of $\\mathfrak{X}(M)$ that is closed under the Lie bracket.\n\\end{df}\n\n Since $T_{\\mathcal{O}_M}$ (with the $L_\\infty\/{\\mathcal{O}_M}$-structure given by the commutator of derivations and anchor the identity) is just $\\mathfrak{X}(M)$, one can restate the definition in terms of the $L_\\infty$-structure.\n\n\\begin{lem}\nA singular foliation $\\mathcal{F}$ over $M$ is a locally finitely generated $\\mathcal{O}_M$-module $\\mathcal{F}\\subset T_{\\mathcal{O}_M}$ that is closed under the $L_\\infty\/{\\mathcal{O}_M}$-structure of $T_{\\mathcal{O}_M}$.\n\\end{lem}\n\n\\subsection{Cofibrant replacements in $\\bm{L_\\infty\/{\\mathcal{O}_M}}$}\\label{cofibrantreplacement}\n\nWe can now show that results similar to those of \\cite{Lavau} can be understood in terms of the model structure on $Mod\/T_{\\mathcal{O}_M}$ (see Proposition~\\ref{modinMODA}) and the semi-model structure on $L_\\infty\/\\mathcal{O}_M$ 
(see Theorem~\\ref{transfer}).\n\nWe first recall the definition of a resolution of a singular foliation \\cite[Definition 1.1]{Lavau}:\n\n\\begin{df}\nA {\\bf resolution of a singular foliation $\\mathcal{F}$} is a complex of vector bundles for which the cohomology of the associated complex of sections gives the $\\mathcal{O}_M$-module $\\mathcal{F}$.\n\\end{df}\n\nThe model structure on $Mod\/T_{\\mathcal{O}_M}$ given by Proposition~\\ref{modinMODA} is such that:\n\n\\begin{lem}\nA resolution of a singular foliation $\\mathcal{F}$ is a cofibrant replacement in $Mod\/T_{\\mathcal{O}_M}$.\n\\end{lem}\n\nWe remind the reader of the definition of cofibrant replacements in a (semi-)model structure.\n\n\\begin{df}\\label{cofibrantrep}\nA cofibrant replacement of an object $X$ in a (semi-)model category $\\mathcal{C}$ with initial object $0$ is an object $QX$ such that there is a factorization of the initial arrow $0\\to X$ as $0\\xrightarrow{i}QX\\xrightarrow{p} X$, with $i$ a cofibration and $p$ a trivial fibration.\n\\end{df}\n\nThe following remark is the main reason for us to work with $L_\\infty$-algebroids instead of Lie $\\infty$-algebroids:\n\\begin{rem}\nResolutions of singular foliations do not always exist (see \\cite[Section 1.1]{Lavau}), while the existence of a cofibrant replacement in $Mod\/T_{\\mathcal{O}_M}$ of a singular foliation is guaranteed by the axiom $SM3$ of a semi-model category. 
\n\\end{rem}\n\nIf a resolution of $\\mathcal{F}$ exists, one can consider the question of equipping it with a Lie $\\infty$-algebroid structure.\n\n\n\\begin{df}\\cite[Definition 1.5]{Lavau}\\label{universal}\nA resolution of $\\mathcal{F}$ is called a {\\bf universal Lie $\\infty$-algebroid over $\\mathcal{F}$} if the differential on the sections can be extended to a homotopy Lie-Rinehart structure.\n\\end{df}\n\nOne of the main results of \\cite{Lavau} (Theorem 1.6) reads: \n\n\\begin{thm}\nIf $\\mathcal{F}$ admits a resolution, then this resolution can be upgraded to a universal Lie $\\infty$-algebroid over $\\mathcal{F}$.\n\\end{thm}\n\n The analogous notion in terms of the semi-model category on $L_\\infty\/\\mathcal{O}_M$ is given by a cofibrant replacement. The existence of such a cofibrant replacement is then a direct consequence of the axioms:\n\n\\begin{thm}\\label{replacement}\nFor any singular foliation $\\mathcal{F}$ there exists a replacement by a dg-$\\mathcal{O}_M$-module endowed with a structure of $L_\\infty$-algebroid over $\\mathcal{F}$.\n\\end{thm}\n\\begin{proof}\nIt suffices to apply Proposition~\\ref{machinerysemimodel}.\\ref{1} to the semi-model structure on $L_\\infty\/\\mathcal{O}_M$ given by Theorem~\\ref{transfer}.\n\\end{proof}\n\nThe terminology ``universal'' in Definition~\\ref{universal} is due to the following result (\\cite[Theorem 1.8]{Lavau}):\n\n\\begin{thm}\\label{liftinglavau}\nTwo universal Lie $\\infty$-algebroids over $\\mathcal{F}$ are quasi-isomorphic in the category of Lie $\\infty$-algebroids, and such a quasi-isomorphism is essentially unique (up to homotopy).\n\\end{thm}\n\nThis result has a direct analog in terms of the semi-model structure on $L_\\infty\/\\mathcal{O}_M$.\n\n\\begin{thm}\\label{lifting}\nLet $\\mathcal{F}$ be a singular foliation. 
Then, any two cofibrant replacements of $\\mathcal{F}$ in $L_\\infty\/T_{\\mathcal{O}_M}$ are quasi-isomorphic (with strict morphisms of $L_\\infty$-algebroids) and any two such quasi-isomorphisms are left-homotopic.\n\\end{thm}\n\n\\begin{proof}\nThis is Proposition~\\ref{machinerysemimodel}.\\ref{5}.\n\\end{proof}\n\nEven though Lemma~\\ref{inclusion} states that any universal Lie $\\infty$-algebroid over $\\mathcal{F}$ is a replacement of $\\mathcal{F}$ in $L_\\infty\/\\mathcal{O}_M$ (i.e. an element of $L_\\infty\/\\mathcal{O}_M$ weakly equivalent to $\\mathcal{F}$), it is a priori \\underline{not} a cofibrant replacement. In particular, we cannot a priori deduce Theorem~\\ref{liftinglavau} from Theorem~\\ref{lifting}. Moreover, Theorems \\ref{liftinglavau} and \\ref{lifting} differ in two other respects: \n\n\\begin{rem}\\label{homodif}\n\\begin{itemize}\n \\item The morphisms in Theorem~\\ref{liftinglavau} are $L_\\infty$-morphisms, while the morphisms in Theorem~\\ref{lifting} are strict morphisms of $L_\\infty$-algebras.\n\\item The notion of homotopy equivalence used in Theorem~\\ref{liftinglavau} is very different from the one we are using, which is based on cylinder objects (see Definition~\\ref{lrhomotopy}.\\ref{leftHomotopy}).\n\\end{itemize}\n\\end{rem}\n\nThe semi-model category statement analogous to \\cite[Proposition 1.22]{Lavau} is the following theorem, which is a corollary of Proposition~\\ref{machinerysemimodel}.\\ref{5}.\n\n\\begin{thm}\nLeft homotopy of Lie $\\infty$-algebroid morphisms is an equivalence relation denoted by $\\overset{l}{\\sim}$, which is compatible with composition.\n\\end{thm}\n\nOnce again, the notions of homotopy equivalence used in the two statements do not coincide. 
The reason is that when one works in the realm of model categories, there are standard definitions of left and right homotopies between morphisms, namely Definition~\\ref{lrhomotopy}.\\ref{leftHomotopy} and \\ref{lrhomotopy}.\\ref{rightHomotopy} below.\n\n\\begin{df}\\label{lrhomotopy}\nLet $A$ and $X$ be objects in a semi-model category $\\mathcal{C}$.\n\\begin{enumerate}\n \\item A {\\bf cylinder object} for $A$ is an object $Cyl(A)$ together with a factorization\n \\begin{center}\n \\begin{tikzcd}\n A\\amalg A\n \\arrow[rr, bend right, \"id_{A}+ id_{A}\"']\n \\arrow[r, \"i_0+i_1\"]\n &Cyl(A)\n \\arrow[r, \"\\sim\"]\n &A\n \\end{tikzcd}\n \\end{center}\n where the map $Cyl(A)\\to A$ is a weak equivalence.\n \\item Similarly, a {\\bf path object} for $X$ is an object $Path(X)$ together with a factorization\n \\begin{center}\n \\begin{tikzcd}\n X\n \\arrow[rr, bend right, \"id_{X}\\times id_{X}\"']\n \\arrow[r, \"\\sim\"]\n &Path(X)\n \\arrow[r, \"p_0\\times p_1\"]\n &X\\times X\n \\end{tikzcd}\n \\end{center}\n where the morphism $X\\to Path(X)$ is a weak equivalence.\n \\item\\label{leftHomotopy} Two morphisms $f,g\\colon A\\to X$ are {\\bf left homotopic}, and we write {\\bf $f\\overset{l}{\\sim}g$}, if there is a cylinder object of $A$ and a map $H\\colon Cyl(A)\\to X$ such that $f+g\\colon A\\amalg A\\to X$ factorizes as\n $$\n A\\amalg A\\to Cyl(A)\\xrightarrow{H}X\n $$\n \\item\\label{rightHomotopy} Analogously, $f,g\\colon A\\to X$ are {\\bf right homotopic}, and we write {\\bf $f\\overset{r}{\\sim}g$}, if there is a path object of $X$ and a map $H\\colon A\\to Path(X)$ such that $f\\times g\\colon A\\to X\\times X$ factorizes as\n $$\n A\\xrightarrow{H} Path(X)\\to X\\times X\n $$\n\\end{enumerate}\n\\end{df}\n\n\\begin{rem}\\label{remho}\n Because of the restrictions imposed by Axiom~\\emph{SM3}, it is natural to use cylinder objects for the notion of homotopy in $L_\\infty\/\\mathcal{O}_M$, i.e. to use left homotopies. 
However, it is still possible to define right homotopies by a construction of path objects akin to the one found in \\cite[Section 2]{Vezzosi}, which yields a notion similar to the one used in \\cite{Lavau} (see \\cite[Remark 6]{Lavau}). But then, it is not clear to us how to prove a statement similar to Proposition~\\ref{machinerysemimodel}.4 for right homotopies. \n\\end{rem}\n\n\\subsection{Open questions}\\label{open}\n\nThe original goal of this paper was to obtain the results of \\cite{Lavau} as genuine properties of a (semi-)model category on $L_\\infty\/T_{\\mathcal{O}_M}$. However, Remark~\\ref{homodif} implies that one needs to change the (semi-)model structure we consider if one wants to reach this goal. More precisely, this work raises the following questions.\n\n\\begin{que}\\label{q1}\nDoes there exist a variant of a notion of model category on $L_\\infty\/T_{\\mathcal{O}_M}$ for which the morphisms are no longer necessarily strict?\n\\end{que}\n\nIf such a structure exists we can consider the following questions:\n\n\\begin{que}\\label{q2}\nIs any universal Lie $\\infty$-algebroid cofibrant? In other words, is the first part of Theorem~\\ref{liftinglavau} a corollary of Theorem~\\ref{lifting} (with not necessarily strict morphisms of $L_\\infty$-algebroids)?\n\\end{que}\n\n\\begin{que}\\label{q3}(Inspired by Remark~\\ref{remho})\nDoes the notion of left homotopy coincide with the notion of homotopy considered in \\cite{Lavau}? In other words, is the second part of Theorem~\\ref{liftinglavau} a corollary of Theorem~\\ref{lifting}?\n\\end{que}\n\nQuestion \\ref{q3} can be divided into two parts.\n\n\\begin{que}\nDoes the equivalence relation given by left homotopy coincide with the one given by right homotopy? \n\\end{que}\n\n\\begin{que}\\label{q4}\nDoes the notion of right homotopy coincide with the definition of homotopy used in \\cite{Lavau}? 
\n\\end{que}\n\nAnother possible generalisation of an $L_\\infty$-algebroid is given by allowing the existence of higher anchors which form a homotopy morphism to $T_A$.\n\n\\begin{que}\\label{q5}\nIs there a (semi-)model structure on the category of $L_\\infty$-algebroids with homotopy anchors? How does it relate to the other model structures above?\n\\end{que}\n\nWe have already noted (Remark~\\ref{remarkLR}) that $L_\\infty$-algebroids are particular examples of Lie-Rinehart algebras up to homotopy. On the other hand, in \\cite[Section 5]{Kji}, Kjeseth studies the notion of a resolution of a Lie-Rinehart pair. \n \n \n \\begin{que}\n Does there exist a variant of a notion of model category on the category of homotopy Lie-Rinehart pairs for which the resolutions considered in \\cite[Definition 5.1]{Kji} are cofibrant replacements?\n \\end{que}\n\n\\begin{que}\nDoes it induce, when restricted to a fixed associative algebra $A$, the structure aimed at in Question~\\ref{q1}?\n\\end{que}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}